Greater Greater Education

Data, data all around, but not the test data we need

We appear to be awash in school performance data. But the data we have in DC can't answer some crucial questions about how much students are actually learning.


Photo by Alberto G. on Flickr.

Teachers who want to know how much they have helped their students learn can't tell that from looking at DC's standardized test scores. But other states are using their test data to help teachers answer that question. The secret, testing experts say, is reporting how much each student has grown academically rather than just whether she's achieved a measure of grade-level "proficiency."

Questions that DC testing data can't answer

  1. How much did a student grow from one year to the next?

    It may be surprising to many, but DC test data cannot answer this question. For teachers like Dunbar High School math teacher David Tansey, that makes the data far less useful.

    "If all I know is that a student isn't on grade level, that doesn't give me much information," he says.

    DC only reports the proficiency levels (Below Basic, Basic, Proficient, and Advanced) for students and groups of students. Teachers are told how close students are to each level.

    Tansey says he would like to know how much each of his students has grown academically from one year to the next. But without growth metrics, he only knows if they've moved from one proficiency level to the next. If they've grown but haven't changed levels, he has no way of capturing that information.

  2. How much value does a school add to its students' growth above what parents provide?

    Many parents feel that they invest a lot in their kids outside of school. They would like to see which schools add the most value on top of what they contribute.

    But no DC test score can tell them that right now. A particular type of growth metric, known as value-added, measures precisely this factor. With value-added growth scores, parents could see whether a school is good at improving the academic achievement of a child like theirs.

  3. Was a particular student's growth enough for him to move toward, or cross into, proficiency?

    Colorado testing experts Dr. Jody Ernst and Richard Wenning say a central question is "How much growth is enough?" The Denver public school system has found an answer: it has adopted the accountability framework developed by Denver's charter authorizer because of its sophisticated growth metrics.

    Denver's School Performance Framework, or SPF, reports seven different growth metrics for schools, including whether they are advancing lagging students at a pace fast enough to reach proficiency before graduation.

  4. Is teaching strategy A more effective at advancing lagging students than strategy B? Is teaching strategy C more effective at growing advanced students than strategy D?

    Dunbar's Tansey says he can't use DC test scores for feedback because they only provide precise measurements for students who are on grade level. He says that DC's test reporting gives him no credit for advancing a 10th-grader from a 6th-grade to a 9th-grade math level, and it doesn't even give him feedback on whether he's succeeded in doing this.

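The gap Tansey describes in question 1 can be made concrete with a small sketch. Using hypothetical scale scores and cut points (these numbers are illustrative, not actual DC CAS values), two students can post very different growth while landing in the same proficiency band:

```python
# Hypothetical scale-score cut points for proficiency bands
# (illustrative numbers only, not actual DC CAS cuts).
CUTS = [(300, "Below Basic"), (350, "Basic"), (400, "Proficient"), (450, "Advanced")]

def band(score):
    """Return the proficiency band for a scale score."""
    label = "Below Basic"
    for cut, name in CUTS:
        if score >= cut:
            label = name
    return label

def growth(prev, curr):
    """Simplest growth metric: year-over-year change in scale score."""
    return curr - prev

# Two students, both "Basic" in both years, but with very different growth:
a_prev, a_curr = 352, 355   # grew 3 points
b_prev, b_curr = 352, 398   # grew 46 points, just short of Proficient

print(band(a_curr), growth(a_prev, a_curr))
print(band(b_curr), growth(b_prev, b_curr))
```

Reporting only the band collapses a 3-point year and a 46-point year into the same label, which is exactly the information loss the article describes.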
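For question 2, one common way value-added is computed is to predict each student's current score from his or her prior score, then credit the school with the average amount its students beat the prediction. This is a minimal sketch with made-up scores; real value-added models control for far more than prior score:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# (prior score, current score, school) for a handful of hypothetical students
data = [(300, 310, "A"), (350, 365, "A"), (400, 420, "A"),
        (300, 330, "B"), (350, 380, "B"), (400, 435, "B")]

a, b = fit_line([d[0] for d in data], [d[1] for d in data])

def value_added(school):
    """A school's value-added: mean residual (actual minus predicted) of its students."""
    resids = [y - (a + b * x) for x, y, s in data if s == school]
    return sum(resids) / len(resids)

print(round(value_added("A"), 1), round(value_added("B"), 1))
```

In this toy data, School B's students outperform what their prior scores predict while School A's fall short, even though both serve students with identical starting points.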
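The "how much growth is enough?" question behind Denver's catch-up metrics (question 3) reduces to simple arithmetic: divide the distance to the proficiency cut by the years remaining, and compare that required pace to the student's observed growth. A sketch with hypothetical numbers:

```python
def on_track(current, cut, years_left, observed_growth_per_year):
    """Is observed yearly growth fast enough to reach the proficiency
    cut score before the student runs out of years?"""
    required = (cut - current) / years_left
    return observed_growth_per_year >= required

# Hypothetical: a student at 340, Proficient cut at 400, 3 years to graduation.
print(on_track(340, 400, 3, 25))   # needs 20 points/year; growing 25/year
print(on_track(340, 400, 3, 15))   # 15 points/year is too slow
```

A proficiency-only report would mark both of these students "not proficient"; a catch-up metric distinguishes the one who will get there from the one who won't.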
The secret is the vertical range of tests

That's because DC's current standardized test, the DC CAS, only assesses whether students are on a certain level. Testing experts refer to this as the vertical range of the test. In the case of DC CAS, its vertical range is a single grade level.

The Office of the State Superintendent of Education (OSSE), which oversees standardized testing in DC, has been planning to adopt tests from the PARCC consortium to assess students against Common Core standards next year. PARCC's vertical range is also a single grade level.

But many states are using tests from the other Common Core testing consortium, Smarter Balanced, which has a vertical range of three grade levels. Smarter Balanced tests can have a greater range because they are given on computers that adapt the test to the student's level of ability. The answer to one question determines the difficulty of the next question posed.
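The adaptive mechanism described above can be sketched as a simple up-down staircase: answer correctly and the next item gets harder, miss and it gets easier, so the test homes in on the student's level across several grade levels' worth of items. This is an illustrative toy, not Smarter Balanced's actual algorithm, which uses item response theory:

```python
def adaptive_test(student_level, start=7.0, step=0.5, items=10):
    """Toy staircase adaptive test.

    `student_level` is the student's true level in grade-level units:
    in this toy model, the student answers an item correctly iff its
    difficulty is at or below that level. Difficulty moves up after a
    correct answer and down after a miss. Returns the final difficulty,
    which serves as the ability estimate.
    """
    difficulty = start
    for _ in range(items):
        correct = difficulty <= student_level
        difficulty += step if correct else -step
    return difficulty

# Starting from a 7th-grade item, the test walks down to a student
# working at a 6th-grade level and up to one at a 9th-grade level.
print(adaptive_test(6.0))
print(adaptive_test(9.0))
```

A fixed-form test with a one-grade vertical range cannot make this move: every student sees the same on-grade items, so performance well above or below that band is measured imprecisely or not at all.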

OSSE officials had intended to hold "a series of stakeholder discussions" on which test to use before making a decision this month, but according to an account of a meeting reported on Greater Greater Education, they decided not to do so after hearing opposition from DCPS. Some charter operators, such as KIPP and Friendship PCS, have said they prefer Smarter Balanced.

Dunbar's Tansey says he is going to testify at tonight's meeting of the DC Board of Education, which includes a discussion of PARCC on its agenda. The meeting is at 6:30 pm at the John A. Wilson Building, 1350 Pennsylvania Ave. NW.

Ken Archer is CTO of a software firm in Tysons Corner. He commutes to Tysons by bus from his home in Georgetown, where he lives with his wife and son. Ken completed a master's degree in Philosophy from The Catholic University of America.

Comments

There are a bunch of implicit assumptions in this article that I don't believe are true.

1) Standardized tests capture a good or holistic view of what a student is learning.

2) Testing is cost-free, i.e., it doesn't impede the regular learning process or distort it.

3) High-stakes testing like this won't create strong incentives for teachers and administrators to game the system.

4) The existing test sets being created are particularly high quality or developmentally appropriate.

by Ben on Mar 20, 2014 2:40 pm
