When the definitive history of the 2009 healthcare reform debate is written, one footnote will note just how varied, even contradictory, the polls were. We can see this already: on any given day, different people can cite different polls and reach very different conclusions. “Americans are in favor of healthcare reform—no, wait, they are against it!”
It goes without saying that, given this uncertainty, cherry-picking of polls has been rife on both the right and the left. Democrats prefer to cite polls on the “public option,” which have consistently shown strong majority support. Republicans, on the other hand, point to polls on general support for healthcare reform, most of which show only plurality support.
At a methodological level, pollsters have been grappling with this dilemma as well. The original debate centered on the variability of question wording and its effect on levels of support. The overriding question was: what is the ideal healthcare question, if such a thing even exists?
More recently, the debate has shifted to explaining the differences between generic healthcare questions and more specific ones referring to the “public option”. The controversy lies in the differential levels of support—generic questions have shown only plurality support, while specific questions referring to the “public option” show majority support. The consensus explanation is that the healthcare debate is quite distant from people’s day-to-day lives and so their answers are “uninformed”—in methodological speak, a classic case of “non-attitudes.”
Both lines of reasoning have their merit. However, we believe that they miss the mark because they assume that polling on healthcare reform is analogous to polling on presidential elections. In our opinion, it isn’t.
Indeed, in presidential elections, our job as pollsters is made easier by the fact that the ballot question is essentially fixed after the primaries. Simply put, we know which candidates will be running, and this all but defines our ballot question for us.
In contrast, issues like healthcare reform are quite fuzzy because typically no bill exists at the beginning of the process. This makes the construction of a single question impossible, if not simply disingenuous.
Put another way, we have no “true value” to measure against—no concrete bill exists (or at least did not exist until recently). You can’t measure what doesn’t exist!
The problem is most apparent when looking at generic questions on healthcare. Such questions are broadly worded and lack any concrete anchor. People, consequently, can (and do) read into them what they want, making their meaning variable. To illustrate our point, let’s look at table 1 below.
The question in table 1 shows that only a plurality (34%) of Americans support healthcare reform (or at least the proposals in Congress). The simple conclusion: Americans do not support healthcare reform.
However, a simple follow-up question shows that about a quarter (25%) of those who oppose the reform bills actually think the proposals “do not go far enough” (see table 2 above)! This same 25% is much more likely to be Democratic and much more likely to support the public option. People, once again, read into the question what they want.
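The arithmetic behind this point is worth making explicit. A minimal sketch, using the 34% and 25% figures from the text but an assumed 48% opposition share (the exact figure depends on the tables, which are not reproduced here), shows how reclassifying the “does not go far enough” opponents changes the top-line picture:

```python
# Hypothetical illustration: 34% support and the 25% "too weak" share of
# opponents come from the text; the 48% opposition share is an assumption.
support = 0.34                 # support the reform proposals (table 1)
oppose = 0.48                  # oppose the proposals (assumed share)
too_weak_among_oppose = 0.25   # opponents who say the bills do not go far enough (table 2)

# Opponents who actually want *more* reform, not less
oppose_but_want_more = oppose * too_weak_among_oppose

# Share favoring reform at least as strong as proposed
support_or_stronger = support + oppose_but_want_more

print(f"{support_or_stronger:.2%} favor the proposals or stronger reform")
```

Under these assumed numbers, the “oppose” camp quietly contributes another 12 points to the pro-reform side, turning a 34% plurality into a 46% bloc that wants reform at least as strong as proposed—which is why a generic up-or-down question, read alone, misleads.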
In contrast, questions which refer to “the public option” and other specific policy measures introduce greater certainty into the ballot question, helping to establish a clear reference point for respondents (see table 3 below). Once again, however, such questions are hypothetical, as we do not know a priori which items will (and will not) be included in the final bill.
So what are our takeaways here? What does polling on American healthcare reform teach us about polling on non-electoral policy issues involving the legislative process?
First, polling on healthcare reform is quite different from polling on presidential elections because our “true value” is not fixed. This makes the construction of a single question impossible, even misleading. Such issues are, well, fuzzy, and only a multiple-indicators approach—some generic questions, some specific ones—will tell the entire story. Here, triangulation is key.
Second, generic questions should be used with caution. At a minimum, they should include a follow-up question to determine why people favor or oppose healthcare reform. We ourselves only added such a follow-up after struggling to interpret the results.
Are such generic questions valid at all? We think they are but with caveats.
Indeed, before the final bill, such questions seem to be nothing more than a measure of optimism about the reform process, much like “right track, wrong track” questions. Looking forward to a final bill, we do expect that such generic questions will become relevant. Only then will they have a “true value” to be measured against.
Third, questions which reference specifics like the “public option” are hypothetical and have to be understood as such. Indeed, without a final bill, they should be used more for sensitivity analysis than for anything predictive: which policy measures garner more support, and which less. While such questions say nothing about general support for healthcare reform, they do help us understand which measures are more (and less) likely to end up in the final bill, as politicians read polls too.
To this end, we have tracked specific items for most of the healthcare debate, understanding from the outset that healthcare reform would fundamentally be a debate about the proper role of government (or lack thereof). All of our items fall along a government-intervention continuum. In our experience, polling on “fuzzy” issues places a premium on understanding the underlying value cleavages at the heart of the policy debate.
Fourth, from an analytical perspective, the combination of generic and specific (hypothetical) questions makes sense. They allow us to be both predictive and diagnostic with our clients, but only when used together.
Fifth, from a media polling perspective, the combination of generic and specific ballot questions is much less tidy than a single “up or down” measure and, thus, more complicated to explain. Looking forward to future non-electoral legislative reform debates, we, as an industry, need to do better at explaining these complexities.
For more information on this news release, please contact:
Senior Vice President
Ipsos Public Affairs
About Ipsos Public Affairs
Ipsos Public Affairs is a non-partisan, objective, survey-based research practice made up of seasoned professionals. We conduct strategic research initiatives for a diverse range of American and international organizations, based not only on public opinion research, but also on elite stakeholder, corporate, and media opinion research.
Ipsos has media partnerships with the most prestigious news organizations around the world. Ipsos Public Affairs is the polling agency of record for The McClatchy Company, the third-largest newspaper company in the United States and the international polling agency of record for Thomson Reuters, the world’s leading source of intelligent information for businesses and professionals.
Ipsos Public Affairs is a member of the Ipsos Group, a leading global survey-based market research company. We provide boutique-style customer service and work closely with our clients, while also undertaking global research.
In 2008, Ipsos generated global revenues of €979.3 million ($1.34 billion U.S.).
Visit www.ipsos.com to learn more about Ipsos offerings and capabilities.
Ipsos, listed on the Eurolist of Euronext – Comp B, is part of SBF 120 and the Mid-100 Index, adheres to the Next Prime segment and is eligible to the Deferred Settlement System. Isin FR0000073298, Reuters ISOS.PA, Bloomberg IPS:FP