Public Data Sets at the National Center for Education Statistics

Searching for Data

By Jill Barshay and Mikhail Zinshteyn

At first blush, the myriad offerings at the National Center for Education Statistics (NCES) can seem overwhelming. To get started, consider its Quick Tables feature, a search tool that lets users locate the tables, figures and charts published in the agency’s tougher-to-peruse data anthologies. Journalists can search by keyword, year of data or topic area.

Education reporters should also become familiar with the Digest of Education Statistics’ pulldown menus, which offer tables on nearly every big-picture snapshot of education at the K-12 and postsecondary levels. (Think free and reduced-price lunches, average teacher pay, number of degree-granting institutions, etc.)

NCES offers various data sets that capture detailed information on different age and demographic groups. In some instances, a data set counts every person or institution in a given category; this is known as a universe survey. Examples include the Common Core of Data, which collects information on every public school in the United States, detailing the student and teacher populations, funding sources and other important figures.

On the higher education side, the Integrated Postsecondary Education Data System collects a bevy of information about every college and university that is eligible to receive federal funds.

A second category of federal data sets is the nationally representative sample survey. These less-familiar collections contain an impressive cache of information. The National Household Education Surveys Program (NHES), for example, gathers data on a representative sample of children from birth through grade 12 and their guardians, recording parent and family involvement in education, participation in early childhood programs, activities before and after school, and adult education trends.

Another form of NCES data is the longitudinal survey, a research approach that follows the same group of people over a period of time.

The next section provides a short primer on one popular longitudinal survey, the High School Longitudinal Study.

Spotlight on the High School Longitudinal Study

The High School Longitudinal Study (HSLS:09) began tracking more than 20,000 ninth graders from across the nation, the high school class of 2013, in 2009. The intent is to see how this generation fares in college and at work through age 30.

Students, parents, teachers and school administrators are asked a broad range of questions at different points in time. For the first time, school counselors were surveyed too, so reporters will get details about the college application process. One of the main goals of this newest high school longitudinal study is to understand when and how students decide to go into the fields collectively known as STEM (science, technology, engineering and math).

In addition to national numbers, the survey provides details for these states: California, Florida, Georgia, Michigan, North Carolina, Ohio, Pennsylvania, Tennessee, Texas and Washington.

Two important reports based on the longitudinal study are expected during the summer of 2015. One is the High School Transcript Study, which should allow reporters to see all the courses that the students took in high school and drill down on course-taking differences by gender, ethnicity/race and income.

Another is a brief update on student activities after high school. Reporters will have information on where students applied to college, where they were accepted and where they planned to go in the fall of 2013. We’ll also learn about their intended majors and financial aid. For students who did not intend to go to college, we’ll learn where they were working, how much they were making, why they chose not to go to college and what their future educational and career aspirations are.

To get started on data you can use from the High School Longitudinal Study, check out the study’s “first look” and “data point” reports, which are narrative summaries that accompany each new wave of information related to the study.

Fuss-Free Data Tools

Unfortunately, some of NCES’ most robust data collection projects require knowledge of advanced statistical software (including many of the longitudinal studies highlighted above). But the Education Department in recent years has pursued several efforts to boost the user-friendliness of its offerings. Chief among those are the National Assessment of Educational Progress (NAEP) data products and the PowerStats and QuickStats tools. (Jump to the NAEP section below to learn more about its grab bag of goods.)

PowerStats and QuickStats allow the user to create charts and tables from a list of hundreds of variables. The catch? The tools support only a limited number of data sets, concerning students with disabilities in early education, K-12 teachers and personnel, and higher education, though more are underway. To begin exploring the data, sign up with a username and password (it’s free) and start tinkering. The folks at NCES even created an instructional video on how to get started.

NCES maintains dozens of data projects. Check out this brochure for a list of them all.

The journalists at Data at Your Desk received training on two other longitudinal data sets: the Early Childhood Longitudinal Study (ECLS) and the Education Longitudinal Study of 2002 (ELS).

To review what’s available through these research projects, explore the reports below.

ECLS

ELS

Some data points might seem stale, dating back half a decade. This should not discourage reporters from studying or citing the data, though. In some cases, the figures from 2011 or 2009 are the most recent on record. In others, the figures serve as useful points of comparison to earlier periods in U.S. education, reference points the reporter can draw on to note changes in trends over time.

“Not all data sets are created equal,” as Jack Buckley, the former head of NCES and now a senior official at the College Board, pointed out. NCES data is slow in coming because it’s thorough; the agency tends to catch errors in state reports that were published more quickly.

NCES data is also typically more reliable than most think-tank data, Buckley said. Think tanks often place a premium on pumping out information quickly to affect current policy debates, he noted.

The National Assessment of Educational Progress (NAEP) — A Reporter’s Friend

By Jill Barshay

The National Assessment of Educational Progress (NAEP) is the closest thing we have to an exam in the United States that can be used to compare academic performance among student groups in different states. You can also compare assessment scores among different ethnicities, races and income groups.

NAEP can support these comparisons and many more. There’s even data on private school students. And journalists can do the analysis quickly, right from their cubicles.

I can’t emphasize enough how useful NAEP data can be in feature and investigative reporting. It’s not just for reporting test results every two years and noting where each state ranks. Any time a state or local education official is boasting about how much students are improving, journalists can see if the claims are reflected in NAEP scores.

A handy way to look at the data is through the NAEP State Comparisons tool. Reporters can quickly see, for example, where the biggest achievement gaps are between rich and poor students in the country.

In seven simple clicks and less than two minutes, I could see the difference in math scores between low-income and high-income eighth-graders in each state or tested territory of the nation. Washington, D.C., Connecticut and Massachusetts, it turns out, have the largest achievement gaps between rich and poor.

With one more click, I could see that Massachusetts also produces the highest low-income test scores in the nation. The state that does the best job educating low-income students also has one of the greatest gaps between rich and poor.

By contrast, the District of Columbia is near the bottom of the nation when it comes to educating low-income children, ranking second to last when compared with the states.

Yet one more simple click produced a slick, color-coded map of the United States: a visual representation of the data that paints in “cautionary” orange the states with larger-than-average achievement gaps. (Reporters can even grab the HTML code and publish the map right on their websites.)

Journalists may also use this data tool to see which states are doing the best job in closing achievement gaps over time. By including both the 2011 and 2013 test results, you can see that both Maryland and Wisconsin have narrowed the rich-poor gap. The outcomes are particularly impressive in Wisconsin, where low-income students are also scoring above the national average. By contrast, the rich-poor gap is getting worse in Massachusetts. The Bay State is near the top of the list of states that are seeing a growing achievement gap between rich and poor.

Education scribes can’t answer every question through the State Comparisons tool. Sometimes, the NAEP Data Explorer is in order. Say you want to see changes in the number of students who are college-ready readers, roughly analogous to “proficient” in NAEP-speak. The NAEP Data Explorer is slightly more complicated to use, but here’s a handy, step-by-step guide with some examples worked out. (Because the NAEP Data Explorer provides data going back nearly two decades, be sure to read the fine print below each table as some years are not fair game for comparison due to changes in the assessments.)

All the information above applies to a growing number of school districts, too. The Trial Urban District Assessment (TUDA) captures the NAEP performance data of more than 20 local school systems. To get started exploring TUDA, head here.

Reporting Data Tips

By Mikhail Zinshteyn

When is a change in test scores significant? Just because a report indicates average scores moved up or down over time does not necessarily mean student achievement actually changed. NAEP scores can rise by six points and still fail the significance test, as Data at Your Desk presenters Emmanuel Sikali of NCES and Jason Nicholas of the research group Westat demonstrated.
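To make that concrete, here is a minimal sketch, in Python, of the kind of comparison behind such judgments. The scores and standard errors are invented, chosen only to show how a six-point rise can fall short of statistical significance when the estimates are noisy:

```python
# Hedged sketch: hypothetical NAEP-style averages and standard errors.
import math

score_2011, se_2011 = 278.0, 2.6   # invented average scale score and its standard error
score_2013, se_2013 = 284.0, 2.6   # six points higher, with the same (invented) precision

diff = score_2013 - score_2011
se_diff = math.sqrt(se_2011**2 + se_2013**2)   # standard error of the difference
z = diff / se_diff                             # test statistic

# At the usual 95 percent confidence level, |z| must exceed 1.96.
print(f"change = {diff:+.1f} points, z = {z:.2f}, significant = {abs(z) > 1.96}")
# change = +6.0 points, z = 1.63, significant = False
```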

Statistical significance plays a major role in how journalists cover national or state-level achievement rankings. Jack Buckley, the former chief of NCES and the current head of research at the College Board, recently told this reporter that “it’s not that the rankings are meaningless, it’s just a lot of them are [statistically] tied. Those who look at the absolute rankings are often ignoring the fact that big chunks of the [listed countries or states] are indistinguishable from each other in terms of ties in the rankings.”

In other words, a country may rank in the middle of the pack on an international assessment yet be in a statistical tie with 10 other countries.
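Here is a rough sketch of that idea. Every score and standard error below is invented, and checking whether 95 percent confidence intervals overlap is only a conservative stand-in for the difference tests statisticians actually run; the point is just how adjacent ranks blur together:

```python
# Hypothetical rankings: (name, average score, standard error).
results = [
    ("Country A", 512.0, 3.1),
    ("Country B", 509.0, 2.8),
    ("Country C", 507.0, 3.4),
    ("Country D", 492.0, 2.5),
]

def interval(mean, se, z=1.96):
    """95 percent confidence interval for an estimate."""
    return mean - z * se, mean + z * se

ranked = sorted(results, key=lambda r: r[1], reverse=True)
for (name_a, m_a, s_a), (name_b, m_b, s_b) in zip(ranked, ranked[1:]):
    lo_a, _ = interval(m_a, s_a)
    _, hi_b = interval(m_b, s_b)
    # Overlapping intervals mean the adjacent ranks are indistinguishable.
    verdict = "statistical tie" if lo_a <= hi_b else "real gap"
    print(f"{name_a} vs. {name_b}: {verdict}")
# Country A vs. Country B: statistical tie
# Country B vs. Country C: statistical tie
# Country C vs. Country D: real gap
```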

Buckley, who spoke at Data at Your Desk, told the journalist participants that while the mantra “correlation is not causation” is by now a cliche, he still sees many statements in the news media that draw spurious connections. There’s even a website that tracks lousy causal claims, called Spurious Correlations.


Nor does a statistically significant change in academic performance necessarily matter for what a student actually learns. In an interview for an earlier article about the conflicting data on whether U.S. students are improving, Buckley told EWA that when comparing average results between countries or states on assessments like NAEP or the Program for International Student Assessment (PISA), statistically significant differences don’t necessarily translate into differences in the classroom. “Does one scale point mean a lot in terms of education? In many cases the answer is no,” Buckley said.
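One standard way analysts make that judgment is to convert a score difference into an effect size. A minimal sketch, with an assumed (not official) student-level standard deviation:

```python
# Hypothetical: translate a statistically significant gap into an effect size,
# i.e. the score difference divided by the spread of individual student scores.
gap_points = 1.0    # a one-point gap can be "significant" when samples are huge
student_sd = 35.0   # assumed student-level standard deviation on the scale

effect_size = gap_points / student_sd
print(f"effect size = {effect_size:.2f} standard deviations")
# effect size = 0.03, far too small to notice in a classroom
```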

One more caution on data notation: be highly skeptical of any figure that’s followed by an exclamation mark, said Data at Your Desk presenters Daniel Potter of the American Institutes for Research and Chandra Muller of the University of Texas at Austin. These tend to be unstable estimates, often because the sample sizes are too small or the error margins too big.
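The check behind such flags is typically the coefficient of variation, the standard error expressed relative to the estimate itself. A minimal sketch, using an illustrative 30 percent threshold rather than any official NCES cutoff:

```python
# Sketch of an instability check: flag estimates whose standard error is
# large relative to the estimate (the coefficient of variation, or CV).
def flag_unstable(estimate, standard_error, threshold=0.30):
    """Return True when an estimate is too noisy to report confidently."""
    return standard_error / estimate >= threshold

print(flag_unstable(12.0, 1.1))  # CV ~ 9 percent: stable enough -> False
print(flag_unstable(4.0, 1.6))   # CV = 40 percent: interpret with caution -> True
```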

Buckley shared additional wisdom on how to organize one’s data sleuthing for a story. “The very easiest things to use are other people’s statistics,” he said, cautioning against wading through the muck of raw data files to arrive at original conclusions, for most stories at least.

There are exceptions, though. Buckley cited the 2009 series published by the Atlanta Journal-Constitution on standardized test cheating, which he called the most important data story of the decade. One reason is that the reporters and the data consultants they worked with did not leap to conclusions from the raw data they collected. The main driver of the research was what Buckley calls a descriptive statistic: figures that illustrate interesting trends. In this case, the AJC team noticed that in certain classrooms student standardized test scores grew by highly improbable percentages. But the AJC reporters did not treat that finding as a smoking gun. Instead, the team dug deeper to find out what could explain the exceptional jumps in student performance.

Buckley advises journalists to consider the questions they hope to answer before structuring their stories. If it’s an inference the journalist is after, here are key terms that regularly appear in the introductions to reports:

  • Descriptive — “U.S. grade eight math achievement gaps are narrowing”
  • Predictive — “Students who score above X are likely ready for college/career”
  • Causal — “Head Start boosts early student learning”

Digging for data is not hard, but it requires a familiarity with the tools made available by trusted sources. The data sets highlighted in this story lab are of varying complexity, but this guide provides numerous links to summary reports that should answer most of the data queries journalists covering education pose.

Remember that NCES and NAEP are your friends. So are the statisticians and researchers affiliated with federally funded reports. These data mavens and outside scholars are excellent sounding boards for data projects in the gestation stage and can recommend methods or resources to help a fledgling story or series along.

While data has its limits, think of it as one more informed source for your reporting, like the well-placed employee at a school district or university whose insights need additional corroboration.

Who wouldn’t want that kind of intel from the inside?
