Using Polls in Education Reporting

Polling isn’t exclusively the province of political reporters. A handful of national surveys released each year focus on education, including the Phi Delta Kappa/Gallup poll on public attitudes toward education and MetLife’s annual survey of teachers. Polling is also often conducted for statewide education-related elections, such as ballot measures or state superintendent races, and, periodically, by news outlets and advocacy organizations on various education issues.

Newly released polling data can make for illuminating stories about where the public stands on key education issues and can be a valuable resource for future story ideas and contextual information. Not all polling data are equal, though. While you don’t need a statistics degree to figure out which polls are worth your time, you do need to approach a new poll with the same skepticism you bring to the rest of your reporting. Think of it like an interview – if there are questions the poll tries to gloss over or doesn’t answer, it could be hiding something.

How to “interview” a poll

The first question you should ask of any poll is who paid for it and who conducted it; the two often aren’t the same. If it comes from a major news outlet, government agency, nonpartisan polling organization such as Gallup or the Pew Research Center, or academic institution, it’s probably reliable. Advocacy groups and partisan organizations often sponsor polls, too, and those call for more caution. Partisan polls can still be useful, but be more skeptical about how the results were framed and how the pollsters arrived at them.

Polling Resources

American Association for Public Opinion Research
AAPOR’s Media Room has a number of handy guides on interpreting polling data, as well as information on how best to get in touch with one of the association’s polling experts.

NORC at the University of Chicago
The center formerly known as the National Opinion Research Center has a partnership with the Associated Press and is a good source for experts and data, including on education.

Pollingreport.com
While its education page isn’t always the most up-to-date, this site is a good, free resource for determining how public opinion has shifted on key issues over time.

“Understanding and Interpreting Polls” Course
This free, three-hour online course produced by AAPOR and Poynter takes you beyond the core concepts covered in this guide and can help transform you into the polling guru in your newsroom.

That takes us to the methodology section of a poll, which should typically be the next part you scrutinize. Whom the pollsters talked with, how they talked with them and when they talked with them are all crucial details in determining how trustworthy the findings are and whether they can be applied broadly. Every poll, big or small, uses the responses of a portion of a group to represent the attitudes of the entire group. That power to extrapolate relies on the size of the sample and the way respondents were selected. Unless the sample has been chosen randomly – the computerized equivalent of drawing names from a hat – and is large enough to capture a range of attitudes and responses, it can’t be trusted to represent the entire group. That group could be the entire country, or it could be a subgroup, such as teachers or parents with school-age children. How the pollsters find respondents who fit that description – typically they purchase databases drawn from public records – and how they select which of those members to call will determine the power of the poll. The larger the sample size and the more random the selection of respondents, the better.

Poll results will also typically be weighted slightly to account for the over- or underrepresentation of certain groups, such as harder-to-reach minorities or cellphone users. Pollsters generally release little information about this process, but make sure you have at least some idea of how the weighting was done.
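If you want a feel for the mechanics of weighting, here is a minimal sketch using made-up numbers and a single demographic split (landline vs. cellphone-only respondents). Real pollsters weight on several dimensions at once and often use more sophisticated techniques, such as raking, so treat this only as an illustration of the basic idea: each respondent counts more or less depending on how under- or overrepresented their group is in the sample.

# A sketch of simple post-stratification weighting with hypothetical numbers.
# Hypothetical shares of the target population vs. shares of the sample.
population_share = {"landline": 0.60, "cellphone_only": 0.40}
sample_share = {"landline": 0.80, "cellphone_only": 0.20}

# Weight for each group = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
# -> landline respondents get weight 0.75, cellphone-only respondents 2.0

# Hypothetical share of each group answering "yes" to some question.
yes_rate = {"landline": 0.50, "cellphone_only": 0.70}

# The unweighted estimate simply reflects the lopsided sample...
unweighted = sum(sample_share[g] * yes_rate[g] for g in yes_rate)
# ...while the weighted estimate corrects toward the population mix.
weighted = sum(sample_share[g] * weights[g] * yes_rate[g] for g in yes_rate)

print(f"unweighted: {unweighted:.0%}, weighted: {weighted:.0%}")
# unweighted: 54%, weighted: 58%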

The most reliable national polls are typically based on interviews with at least 1,000 respondents, chosen to reflect the actual spread of ages, ethnicities, genders and geographic regions. The fewer the respondents, the greater the margin of error – and the less authoritatively you can report many of the responses. Polls that seek to represent the attitudes of a smaller population, such as a state or city, don’t need as many respondents to be accurate, but you’ll still want to know the margin of error. A margin of error of up to 4 percentage points in either direction is typically trustworthy – anything higher is worth asking about. Bear in mind that the margin of error for subgroups within a poll – teachers of a particular ethnicity, for example – is typically higher than for the entire sample, and that the margin of error for those subgroups usually isn’t included in the public release of the data.
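To see why sample size matters, here is a rough back-of-the-envelope calculation. It uses the standard textbook formula for a simple random sample with responses assumed to split 50/50; pollsters’ published margins also account for weighting and design effects, so their figures will usually be a little larger.

import math

def margin_of_error(sample_size, confidence_z=1.96):
    # Approximate maximum margin of error at 95% confidence for a simple
    # random sample, assuming a 50/50 split (the most conservative case).
    return confidence_z * math.sqrt(0.25 / sample_size)

for n in (1000, 500, 100):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.1%}")
# n =  1000: +/- 3.1%
# n =   500: +/- 4.4%
# n =   100: +/- 9.8%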

Look who’s talking

It’s just as important to know how the interviews were conducted. The gold standard has traditionally been live telephone calls, in which demographic details about the respondent can be verified. That can be expensive, though, because many surveys make multiple attempts to reach respondents. The work is further complicated by the increasing number of Americans – more than one-third of the population – who rely exclusively on a cellphone. By law, cellphones must be dialed by hand rather than with the automated dialing systems many pollsters use to speed up landline calls, and cellphone users also tend to have lower response rates. That makes dialing cellphones expensive and cumbersome, but nevertheless essential to an accurate and representative poll. Polls that don’t attempt to contact any respondents via cellphone (or contact too small a percentage) are likely to underrepresent younger respondents and minorities, a common issue that led to inaccurate results in 2012 election polling. Pay attention, too, to when the calls were made. If the answers were gathered weeks ago, they might not represent the freshest public opinion on a subject.

In part because of the expense of live telephone polls, an increasing number of outfits conduct polls online or through automated phone calls. Both have caveats. It’s harder to verify the identity of respondents on automated phone calls, and these polls are even less likely to include cellphone respondents. Automated polls are also much shorter, allowing for little time to ask probing or interesting questions.

Online polls are often even less representative of the population as a whole. Many internet panels rely on serial poll-takers who earn cash for their participation, so the sample is hardly random. If, on the other hand, respondents (perhaps members of a professional organization) are contacted at random to take an online survey, those results can be extrapolated to a larger population – so pay close attention to the methodology. Online and automated polls are met with skepticism by many in the polling industry, but online polls, at least, have proven reliable in some cases. In fact, polling guru Nate Silver found that online polls did a slightly better job than traditional live telephone polls in predicting the 2012 presidential election.

Finding the story

Once you’ve figured out how the survey was conducted, it’s time to turn to the results themselves. Many polls will release a “topline” document, which shows the wording and order of the questions and the results from all respondents. Pay particular attention to questions that seem leading, are unnecessarily complex or use loaded language. Any of the three could be an indication that the pollsters are fishing for a particular response. Order matters, too. A question that mentions a school shooting, for example, could influence responses to a later question about school security. Some polls, such as the most recent MetLife teacher satisfaction poll, rely on focus groups to help shape what to ask and how to ask it. Focus groups are essentially moderated discussions with targeted participants – people with relevant opinions on, or influence over, the topic. The results aren’t typically made public, but it’s worth noting if a focus group helped shape the poll.

One thing to keep in mind is that pollsters don’t always include every question they asked in the topline document. If the results seem to favor the position of the group sponsoring the poll too heavily, it’s worth asking whether anything has been left out.

There’s no substitute for looking at the actual questions and responses in the topline document. The most interesting finding might be what’s highlighted in the press release and accompanying materials, but just as often you’ll find the best takeaways by combing through the results yourself. If the poll is about a political race or ballot measure, the likelihood of victory is the most important result, but attitudinal questions related to other key issues on the beat could add complexity to your story or drive future coverage. Think about how the results fit into the narrative of key issues on your beat. How do the attitudes of the public compare with statements by school officials, politicians or teachers’ unions? It can pay to look at the responses of key constituencies within the overall sample, though bear in mind that their results are likely to have a higher margin of error than the overall group. Pay close attention to series of questions that build a fuller picture of attitudes on an issue or policy – for example, not just whether people favor or oppose the Common Core, but which pieces of the policy they support and which they want left out.

Looking outside the poll

While this guide lays out some of the key elements to look for in a poll, if you’d like additional help interpreting the results – or confirming that the methodology is sound – the American Association for Public Opinion Research has a team of public opinion experts ready to field your questions on deadline. Local and national academics in political science or education departments can also help vet the methodology or put the findings in context.

Broadly speaking, that’s your next task after vetting a poll and identifying the findings that seem most interesting: What’s the context? Try to identify whether other recent polls have tackled similar issues, and what they’ve found. Look to historical polls to see whether attitudes about certain issues appear to be shifting. Those data can be easier to find for polls like the annual PDK/Gallup poll, whose earlier results are made available by the poll’s sponsor. Other resources include pollingreport.com, a free archive of publicly released polls, and the more comprehensive, but expensive, Roper Center Public Opinion Archives. Bear in mind that prior polls may have been conducted differently, so some of the “change in opinion” may in fact be due to differences in wording or methodology.

Of course, as is the case with any data story, you’ll want to find faces for whatever story the polling data reveal. Academics can provide context and comment on the poll, but you’ll want to find people whose experiences bear out what the data show. If your news organization has sponsored the poll, you can likely ask the polling company you use to provide you with names and numbers of actual respondents to the survey who wouldn’t mind speaking with a reporter for more in-depth information.

You also won’t want to get too bogged down in the numbers – focus on a few key findings rather than a laundry list of results. Try to avoid slipping into the language of pollsters, too: say adults or teachers, for example, not respondents. At the same time, make sure you don’t overstate the findings. Bear in mind the margin of error – and make sure that differences that fall within the margin of error aren’t presented as strong preferences in either direction. The Associated Press’s rule of thumb is not to report a lead in a political poll unless one candidate leads the other by more than twice the margin of error. Differences greater than the margin of error but less than double it can be described as slight preferences.
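As a quick illustration of how that rule of thumb plays out, here is a small sketch. The function name and the cutoffs are just shorthand for the guidance described above, not official AP language.

def describe_lead(candidate_a, candidate_b, margin_of_error):
    # Report a lead only when it exceeds twice the margin of error; treat a
    # gap between one and two margins as a slight edge; otherwise the race
    # is too close to call.
    lead = abs(candidate_a - candidate_b)
    if lead > 2 * margin_of_error:
        return "clear lead"
    if lead > margin_of_error:
        return "slight edge"
    return "too close to call"

# With a 4-point margin of error, a 52-45 split is only a slight edge; the
# gap would need to exceed 8 points before reporting a clear lead.
print(describe_lead(52, 45, 4))  # slight edge
print(describe_lead(56, 42, 4))  # clear lead
print(describe_lead(50, 48, 4))  # too close to call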

Finally, make sure that you include information on who sponsored the poll, how it was conducted, the population surveyed (e.g., teachers), how many people responded, the margin of error, and when the poll was conducted. If you’ve found any reason to doubt any of the findings – funky questions, say, or a tremendous deviation from past opinion on the subject – make sure to point that out.
