Greater Greater Education

Here's the DC school ranking you should be looking at

People make a lot of decisions based on school test scores. Parents select schools for their children. Administrators fire principals and close schools. But few realize that they are using the wrong scores to make these important decisions.


Photo by Gilpin2010 on Flickr.

Most people use the "percent proficiency" score, which measures what percentage of the students in a school are proficient in math or reading on the DC CAS test. They should also be using the Median Growth Percentile (MGP), which is based on the same DC CAS scores. The Office of the State Superintendent of Education (OSSE) calculates MGP, but few people look at it.

That can cause problems. The Public Charter School Board recently closed a school that ranks among the best in the city by MGP because its percent proficiency scores were too low. Where does your local school rank based on MGP scores? See below.

Proficiency measures demographics, MGP measures value added by a school

Most parents compare schools by looking at the percentage of students in the school who tested proficient in math and reading. It's understandable why they do that, as DC Public Schools prominently displays this information on their web site.


DCPS School Profiles web page for Deal Middle School.

The DCPS web site shows that 85% of Deal students tested proficient in math and 82% tested proficient in reading. Deal ranks as having the 10th highest test scores in the city, the highest of any middle school.

But what does that mean? How much of this comes from actual great teaching at Deal, and how much from the fact that Deal draws from some of the most affluent parts of the District?

What really matters is how much a school helps its students advance. Percent proficiency doesn't tell you this on its own. For example, Janney Elementary feeds into Deal Middle School. At Janney, 90% of students test proficient in math and 93% test proficient in reading, both higher than Deal's numbers. Does that mean Janney students are going downhill when they attend Deal? Probably not.

What would happen if a housing development opened next door to Deal, and the city attached a sizable affordable housing requirement to the development which drew some families with middle schoolers who were previously farther behind? Deal's proficiency percentages would go down the following year. Does that mean that the quality of instruction got worse at Deal? Probably not that, either.

These scenarios illustrate the problem with using static data to try to understand the quality of instruction at a school. What you want to look at is longitudinal data.

Longitudinal data tracks the performance of the same students over time, to measure the value added by different schools and classrooms over the entire schooling of a student. MGP uses longitudinal data.

The National Academy of Sciences argues against using the most prominent source of static test scores, the federal NAEP test, to draw conclusions about the causal effect of school reforms for exactly this reason.

Yet nearly every piece of advocacy research arguing for or against school reform makes the mistake of using static data. That's why they draw different conclusions using the same data, leaving parents confused and frustrated.

Ranking DC schools by MGP scores reveals some surprises

Here's how MGP works. If a school has an MGP of 60, that means the median student in that school grew faster than 60% of the students citywide who had similar test scores in previous years.
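To make that idea concrete, here is a minimal sketch of the calculation in Python. The scores are made up, and "academic peers" are approximated as students with the same prior-year score; OSSE's actual model uses quantile regression over multiple years of score history, so treat this as an illustration only.

```python
from statistics import median

# Hypothetical data: each tested student's prior-year and current-year DC CAS score.
students = [
    {"school": "A", "prior": 640, "current": 668},
    {"school": "A", "prior": 612, "current": 630},
    {"school": "B", "prior": 640, "current": 655},
    {"school": "B", "prior": 612, "current": 645},
    # ... every tested student in the city
]

def growth_percentile(student, everyone):
    """Percent of academic peers (same prior score) this student outscored this year."""
    peers = [s for s in everyone if s["prior"] == student["prior"]]
    beaten = sum(1 for s in peers if s["current"] < student["current"])
    return 100 * beaten / len(peers)

def school_mgp(school, everyone):
    """Median of the student growth percentiles for one school."""
    sgps = [growth_percentile(s, everyone)
            for s in everyone if s["school"] == school]
    return median(sgps)

# An MGP of 60 means the school's median student outgrew 60% of similar-scoring peers.
print(school_mgp("A", students))
```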

The top 5 schools by MGP are not in the top 5, or even the top 10, when ranked by percent proficiency.

| School | MGP Rank | MGP: Math | MGP: Rdg. | % Prof Rank | % Prof: Math | % Prof: Reading | Grades | Ward |
|---|---|---|---|---|---|---|---|---|
| Thurgood Marshall Academy PCS | #1 | 81.4 | 75.1 | #19 | 81.4 | 75.1 | 9-12 | 8 |
| KIPP PCS: AIM | #2 | 75.7 | 68.4 | #24 | 85.0 | 59.3 | 5-8 | 8 |
| Hyde-Addison ES | #3 | 67 | 75.2 | #13 | 81.2 | 83.2 | PK-5 | 2 |
| DC Prep Edgewood MS | #4 | 76.3 | 65.3 | #15 | 88.8 | 71.9 | 4-8 | 5 |
| KIPP PCS: KEY | #5 | 71.7 | 69.0 | #23 | 78.4 | 66.7 | 5-8 | 7 |

Deal Middle School, the 10th ranked school by test scores and top ranked middle school, ranks 38th by MGP. Deal has a math MGP of 59.9 and a reading MGP of 58.5.

Does that mean that KIPP middle schools are better than Deal? Many parents would say no. And MGP doesn't measure everything important about a school on its own. But this reveals how parents often use test scores: as an indicator of the quality of the other students, not the quality of the instruction.

Public Charter School Board closes 2nd best school in DC based on 2012 MGP

Septima Clark charter school had the 7th highest MGP in the city in 2012. The all-boys elementary school in Anacostia had a 2012 Reading MGP of 77, the 2nd highest in the city.

That means the typical Septima Clark student outscored 77% of students citywide who had the same test score history in previous years.

However, the Public Charter School Board (PCSB) ranked Septima Clark as a "mid-performing school" in its Charter Performance Report. The PCSB indicated that it would close the school, which forced Septima Clark to merge with Achievement Prep. One parent told ABC, "Either the board was misinformed, had no idea what was going on or just deliberately did not care".

Asked why the PCSB closed one of the top-performing schools in the city, Executive Director Scott Pearson said "Growth is one of many important indicators of school quality, but we caution the use of it in isolation". Pearson pointed to a high churn rate of students at Septima Clark, which indicates to him that parents are dissatisfied.

Pearson said that the PCSB "weighs growth as a factor, along with proficiency, attendance, re-enrollment, and whether students can read by third grade (a predictor for future successes such as high school graduation and college completion). We were pleased to see Septima Clark PCS had a strong showing in growth last year, but previous year's growth scores were not as strong, and its proficiency is one of the lowest in the city."

MGP scores should be a larger factor in assessing schools

OSSE has provided MGP scores for 2 years. Nonetheless, advocates on all sides of the education reform dialogue continue to use non-longitudinal data to assess the outcomes of school reform initiatives. And DCPS, PCSB and OSSE continue to prominently display percent proficiency scores of each school on their online report cards.

All three agencies should modify their online report cards to prominently display MGP, and explain it in layman's terms. It's not complicated. And people make very important decisions based on these scores.

Journalists should also ask advocacy researchers why they use static data when longitudinal data is available. Just like journalists note the margin of error of studies that they report, they should also note when advocacy research relies on non-longitudinal data.

Wanna see where your local school ranks? Here's the entire list (XLSX) of DC schools, ranked by the average of 2011 and 2012 MGP. How does your child's school or your neighborhood school rank, and what does this tell you?


Ken Archer is CTO of a software firm in Tysons Corner. He commutes to Tysons by bus from his home in Georgetown, where he lives with his wife and son. Ken completed a Masters degree in Philosophy from The Catholic University of America. 

Comments


@Catania_EdCmte Chair: @dcpublicschools shld be closed not just for low enrollment but low proficiency performance — like charter schools.

http://dccouncil.us/events/education-hearing-on-b20-310-b20-311-b20-328-and-b20-41-public-witnesses

by @shawingtontimes on Jul 9, 2013 10:03 am • linkreport

This is a useful and informative discussion. I was not aware of the MGP scores. Is there a way you can link the final spreadsheet in something other than Google Doc Preview mode? As currently linked it's tiny and unreadable on screen.

by Lane on Jul 9, 2013 10:18 am • linkreport

Lane: I've downloaded the spreadsheet as Excel and switched the link to point to that version, which should be more readable.

by David Alpert on Jul 9, 2013 10:52 am • linkreport

google docs: unreadable in government offices.

by andy on Jul 9, 2013 11:01 am • linkreport

When the page reloaded after my comment, there was a new link! Thank you!

by andy on Jul 9, 2013 11:03 am • linkreport

What really matters is how much a school helps its students advance.

What also really matters is a student's peer group in the classroom. Partly due to the herd mentality of children and partly due to the difficulty of teaching to a classroom of widely ranging skill levels and aptitudes. If a student who is proficient is in a classroom full of people who are not proficient, that student is unlikely to be challenged.

by Falls Church on Jul 9, 2013 11:44 am • linkreport

What really matters is how much a school helps its students advance.

The academic needs of students at below proficiency who need a school to help them become proficient are different than the academic needs of students who are already proficient and will benefit from different types of instruction.

The percent proficiency score may indicate if a school is tilted towards helping poor performing students catch up or nurturing already-proficient students further.

by Tyro on Jul 9, 2013 11:53 am • linkreport

Very good piece, Ken.

Regarding "what's important", @Falls Church and Ken are both right. Just need to specify the perspectives (matters to whom?). I've added some words in brackets to help clarify...

"What really matters [for accountability] is how much a school helps its students advance."

"What also really matters [to each student's own parents] is a student's peer group in the classroom."

In fact, it's possible to construct two different types of school performance indicator: one for accountability purposes that statistically removes peer effects and only picks up school effects ("intrinsic performance") and another that combines the effectiveness of the adults working at the school along with the influences, whether positive or negative, of the peers ("total performance").

Here's a useful article on this from the peer-reviewed Economics of Education Review: http://www.sciencedirect.com/science/article/pii/S0272775796000817

The MGP is a great step in the right direction and as Ken points out, a huge improvement over proficiency rates, which are often misinterpreted. This is because MGPs take into account students' prior achievement. However, we can do even better by comparing students within demographic groups and controlling for factors that are known to predict test scores but are outside the control of teachers in a given year, including English language learner status, disability, and family income. If you control for any of these factors you get a fairer measure of intrinsic teacher or school performance. But the main takeaway is that MGP is better than what we often use now.
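As a rough illustration of the kind of adjustment I mean, here is a minimal sketch in Python. The file and column names are hypothetical, and plain least squares with school fixed effects stands in for the more careful models used in practice:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level file with columns:
# score (this year), prior_score (last year), ell, sped, frpl, school
df = pd.read_csv("students.csv")

# Regress this year's score on last year's score plus demographic controls;
# the estimated school effects are one crude "intrinsic performance" measure.
model = smf.ols("score ~ prior_score + ell + sped + frpl + C(school)", data=df).fit()
print(model.params.filter(like="school"))
```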

by Steven Glazerman on Jul 9, 2013 12:12 pm • linkreport

@Tyro is also correct. Proficiency rates can be useful for answering some questions, just not school accountability ones.

by Steven Glazerman on Jul 9, 2013 12:13 pm • linkreport

Interesting piece. Improvement for some schools matters, but for high-achieving schools there may be less room for growth. This analysis seems to diminish the accomplishments of schools which have already achieved the higher static metric. If a school is 80-90% proficient, how much improvement would you expect? Wouldn't it be more significant to weight longitudinally the percentage of advanced students in these cases? I really don't need a chart to know that Janney students can read and perform math at grade level. It would be more helpful to see how many move up to advanced proficiency.

by anon_1 on Jul 9, 2013 12:51 pm • linkreport

Steven Glazerman -- I was going to point out that an organization called Mathematica calculates those sorts of numbers for DC Public and Public Charter Schools, but then I recognized the name from previous comments and realized you might be aware of such calculations. :)

by Schools Watch on Jul 9, 2013 1:18 pm • linkreport

@anon_1: I think what you are worried about are ceiling effects: the possibility that it's harder to show growth at the high end because the underlying scale does not adequately discriminate among, say, high and very high achievement.

Fortunately, this is probably not a big concern here because the MGP and value added estimates that Mathematica calculates are based on the continuous measures of achievement, the DC-CAS, which has a finer grain than the coarsened proficiency rates. Proficiency rates collapse groups of DC-CAS scores into just a few bins, and in doing so discard information about how scores vary within those large groupings (including the Advanced Proficient bin).
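A toy example of what the binning throws away (the cut score and scale scores below are invented):

```python
# Two classrooms with identical proficiency rates but very different scale scores.
PROFICIENT_CUT = 650

classroom_a = [651, 652, 653, 655]   # everyone just barely over the cut
classroom_b = [690, 700, 710, 720]   # everyone far above the cut

for name, scores in [("A", classroom_a), ("B", classroom_b)]:
    rate = sum(s >= PROFICIENT_CUT for s in scores) / len(scores)
    print(name, f"{rate:.0%} proficient, mean scale score {sum(scores)/len(scores):.0f}")

# Both classrooms show 100% proficient; only the continuous scores reveal the
# difference, which is why growth models work with the underlying DC-CAS scale.
```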

Yes, @Schools Watch, I should have mentioned that we do the value added calculations for DCPS and OSSE, but my comments are discussing the merits of these measures more generally.

by Steven Glazerman on Jul 9, 2013 1:26 pm • linkreport

My proposed SOP to find your best DCPS for your kids:
1. Determine acceptable commute distance and eliminate non-qualifying schools;
2. Set threshold test scores in math and reading and eliminate non-qualifying schools;
3. Sort based on weighted math or reading MGP into a top 10;
4. Sort your top 10 based on personal preference into a top six. If there are any programs of particular interest, e.g., language, move these up the list. If there are any schools on the list with very easy admissions (check last year's waitlist length), move these down the list to take into account DCPS' knockout preference in the lottery.

For PCS: apply to best 12 or so and see whether you get in to any. Good luck with that.

by andy on Jul 9, 2013 1:43 pm • linkreport

google docs: unreadable in government offices.

Better yet, the original version didn't load in Chrome. I had to use IE.

by oboe on Jul 9, 2013 2:21 pm • linkreport

I agree that MGP is a useful metric, and I hadn't known about it before. But of course there may be other important factors that neither a static nor a longitudinal score will tell you much about. It's interesting that Septima Clark (the charter school that the PCSB recently closed) had such a high MGP, but I heard (from someone with first-hand knowledge whose judgment I respect) that the environment there was "toxic." Unfortunately, there's no score available to reflect something like that.

by Natalie on Jul 9, 2013 4:28 pm • linkreport

MGP for a given year reflects the new in-year cohort for a given school grade, correct? If so, gentrification could lead to some strange shifts - if a new cohort comes in that is dissimilar from the previous year's students, their MGP could drop because they did better the previous year.

Is that basically correct?

by andy on Jul 9, 2013 4:32 pm • linkreport

@Steven Glazerman

"Fortunately, this is probably not a big concern here because the MGP and value added estimates that Mathematica calculates are based on the continuous measures of achievement, the DC-CAS, which has a finer grain than the coarsened proficiency rates."

Still, if high DCCAS schools don't show high MGP it suggests that MGP can't differentiate between them and is not useful for high achieving schools regardless of the finer grain.

Also, is there MGP-related data that shows how much the students' scores rise year to year? This data could help with ceiling effect concerns. For instance, KIPP AIM's 76 MGP in math might represent a 12-point rise against a score cohort with a 3-point rise, while Janney may have a ~60 math MGP with a 3-point rise against a cohort averaging a 1-point rise. But Janney will never get a 12-point rise above its math averages, due to the ceiling of the DC CAS and the limits of student interest in giving the extra effort to move from a 90 to 100% score versus the effort to move from 70% to 80%.

by leeindc on Jul 9, 2013 10:17 pm • linkreport

I'm sure that the researchers have thought of this, but what happens in the scenario where teaching quality is equally good at all schools? It seems like you'd nevertheless end up with a different MGP for each school.

by Tom Veil on Jul 9, 2013 10:47 pm • linkreport

Mr Archer, not sure exactly what you mean by "median growth percentile." The link under the term in the 2nd paragraph, which normally would go to a formal definition, goes to the google docs image of the spreadsheet of the data.

Further below you write "Here's how MGP works. If a school has an MGP of 60, that means that the students in that school scored better than 60% of the students citywide who had similar test scores in previous years."

This is vague. It does not tell me if the score follows the progress of individual students, which as a parent is what concerns me most. Note that @Andy's 4:32 comment is asking the same thing.

Can you please quote or cite the formal, mathematical definition of the metric?

by goldfish on Jul 10, 2013 8:50 am • linkreport

goldfish: Maybe this is what you are looking for?

http://pcsb-pmf.wikispaces.com/Growth+Model+FAQ

A student growth percentile (abbreviated SGP) measures how much a student's performance has improved from one year to the next relative to his or her academic peers: other students statewide with similar DC CAS test scores in prior years.

The calculation answers the question, "Among other students with similar DC CAS test score histories in previous years, what is the range of scores attained this year?" The model then uses the answer to determine whether a student grew at a faster or slower rate than the students' peers, or at a similar rate. ...

The median growth percentile summarizes student growth rates by school, grade level, or other group of interest.

The median student growth percentile is the midpoint of student growth percentiles in the school. Half of the students had student growth percentiles higher than the median; half had lower.

I've put this link in where the original article introduces MGP instead of just a link to the spreadsheet. Ken, let me know if you want a different one.

by David Alpert on Jul 10, 2013 9:01 am • linkreport

@David Alpert, thanks.

So it is an attempt to measure the growth of the distribution of test scores of the school, from one year to the next.

Couple of thoughts. I presume that kids with low scores will (or should) improve a lot, and vice-versa kids with high test scores. So this measure favors schools with low-performing kids. As such, it is a useful tool to discriminate the schools that are improving the most with low scores, but not really very helpful for schools that have high test scores.

It is an index of improvement.

BTW, when a parent considers a school, every one I know eyeballs how much the scores have improved from one year to the next. Is MGP any better than that?

by goldfish on Jul 10, 2013 9:14 am • linkreport

One more thought: since MGP is normalized, it is important to know its distribution, to judge how significant slight differences between schools are. What are the error bars?

by goldfish on Jul 10, 2013 9:19 am • linkreport

Couple of thoughts. I presume that kids with low scores will (or should) improve a lot, and vice-versa kids with high test scores.

Why would you assume this? Kids who have low scores likely have them because they do not improve much each year.

It is an index of improvement.

Yes, and since we assume that there are schools that are "good" and "bad" at what they do, isn't measuring how much they improve their students' scores important?

by MLD on Jul 10, 2013 9:48 am • linkreport

Why would you assume this?

Because overall test scores in DC are increasing. The kids already doing well can only improve so much -- a kid getting 100% can't improve at all.

by goldfish on Jul 10, 2013 10:10 am • linkreport

I think MGP would be a good measure but for a few problems.

First, the DC-CAS is suspect due to cheating. How widespread this is, or has been, is hard to know because OSSE works for the Mayor and has a conflict of interest in exposing the depth and breadth of cheating.

Second, the DC-CAS changes from year to year. Vendors and standards change constantly, and the actual content of the DC-CAS is confidential, so we don't have any independent check on the quality and nature of the test. Moving from the Massachusetts standard to Common Core may have been good, but the Massachusetts standard was demonstrably higher, so this creates a false increase.

Third, charters have a lot more flexibility to expel students. To what degree are their ratings, on both proficiency and growth, based on removal of underperforming and disruptive students rather than a better educational model? Does this track individual students, or cohorts?

To what degree is good performance on the DC-CAS due to narrowing the curriculum and removing a healthy broad curriculum to win the DC-CAS game?

I always point to the advanced number and basic number as other measures. There are a number of schools with high proficiency but low advanced scores which shows the schools basically warehouse high achievers when they are confident they will beat the proficient number. And high below basic relative to basic means a school is focusing on getting kids from basic to proficient and to some degree giving up on kids they see as having no hope to cross the proficiency line. MGP, properly implemented could help identify this.

One way to improve all the testing is to have a DC-CAS in the fall, right at the start of school and again late in the year. This will help to identify both cheating and summer loss. If you only look at how well a student was doing the previous spring you can't account for differentials in summer loss. Some studies have shown that almost the entire differential between students in poor or affluent situations is summer loss.

To deal with the advantage that charters get by expelling students we could develop some kind of "relief pitcher rule" so that schools that expel keep responsibility for poor performance of a student, unless the receiving school can make it up. This doesn't of course deal with the issue that any school who can expel disruptive students is at a huge competitive advantage over a school that cannot expel them, and even has to accept the disruptive students from other schools which is a factor we don't have data to understand, but is a large burden for DCPS schools.

by Mary Melchior on Jul 10, 2013 11:01 am • linkreport

@Mary Melchior: I think it is a mistake to conflate issues with MGP with the difference in expulsion policies between charters and DCPS.

The expulsion issue is real but far more complicated than what can be done on a spreadsheet. For example, DCPS really does not expel; it just puts kids in extended suspensions, 'encouraging' them to drop out of their own accord. No chance MGP can be tweaked to account for these kinds of problems.

by goldfish on Jul 10, 2013 11:12 am • linkreport

There is also a problem with growth related to basic proficiency. I remember when my kids were at Langdon the teachers there were very upset that another school got huge bonuses for increasing scores dramatically under Rhee, while they had their kids at twice the proficiency and had increases, but not as high. Especially if you have kids who are already advanced, the test may not even cover material the kids have learned. These are not dynamic tests that keep giving kids harder content if they get questions right. If a kid gets 100% one year and 100% the next, that shows as no gain, even though there may have been remarkable gain.

by Mary Melchior on Jul 10, 2013 11:22 am • linkreport

@Mary Melchior: thanks for pointing out that DC CAS changes from one year to the next.

But by normalizing MGP by how much the average score improves as a function of test score, this accounts for the changes in the test from one year to the next, no?

The scatter is where the problems lie. Improvement at the highest performing schools will be quite small, divided by a similarly small number, increasing the error.

by goldfish on Jul 10, 2013 11:30 am • linkreport

Agreed! DCPS does post growth on their website, but it's hidden, and OSSE could also do a better job of presenting and communicating their data. Working with organizations such as Code for DC to put the data in a central place (such as opendatadc.org) is a nice step forward.

Also, note that some of the best schools by MGP (Community Lamond, Hendley) were flagged for cheating by OSSE recently.

by Graham on Jul 10, 2013 11:30 am • linkreport

Charters have a lot more flexibility to expel students. To what degree is their rating both on proficiency based on removal of underperforming and disruptive students, instead of a better educational model.

Many are raising this objection: that creamskimming by some schools casts doubt on the validity of longitudinal data.

The answer is that creamskimming, like cheating, undermines any test score data used for school accountability. If creamskimming and cheating didn't happen, then longitudinal data would be the only reliable basis for assessing school quality.

That's why I don't understand why those most in favor of score-based accountability aren't in the forefront of the battle against cheating and creamskimming.

We have to tackle creamskimming. I believe that the only solution to the problem of transfers out of high-performing schools is for OSSE to add to its annual mobility study an anonymous audit of a sample of such transfers, particularly when the student is transferring to a low-performing school. OSSE can do such audits (they do an audit of families whose kids are not enrolled in pre-K, for example) but they obviously need greater political independence to be able to do something like this. That's why it's critical - CRITICAL - that we push the Council to pass the provision of Catania's Governance Act that makes the State Superintendent dismissible only for cause.

by Ken Archer on Jul 10, 2013 11:42 am • linkreport

note that some of the best schools by MGP (Community Lamond, Hendley) were flagged for cheating by OSSE recently

MGP is actually used by OSSE to flag classrooms where cheating may have occurred, for further investigation.

In fact, using longitudinal data shows that the damage done by cheating extends for years into the future - because longitudinal data relies on scores over multiple years. That's why OSSE should be given the political independence from the Mayor that CM Catania is pushing for (making the State Superintendent dismissible only for cause) so that they can really investigate cheating.

by Ken Archer on Jul 10, 2013 11:44 am • linkreport

Everybody here cares about their children, but how can a discussion based on quantitative measures give you peace of mind about a school? Sometimes when I am making big life decisions, I start making lists of the pros and cons for each possibility, but that ends up being more of a distraction than anything else. In the end, first-hand experience has always been more of a deciding factor.

We could have other conversations about education. For example - how do various aspects of schooling today prepare children for adulthood and citizenship? What do we hope/expect to see in our children as they develop into adults?

by Montessori Teacher on Jul 10, 2013 11:44 am • linkreport

We could have other conversations about education. For example - how do various aspects of schooling today prepare children for adulthood and citizenship?

Presumably as a side effect of preparing students for adulthood and citizenship (by teaching them skills like self-discipline, time management, an ability and desire to learn and stay informed) such teaching will display itself in the form of quantitative scores (higher reading proficiency, better test scores as a result of studying).

by JustMe on Jul 10, 2013 1:09 pm • linkreport

Hi JustMe,

I agree that in most cases test scores would reflect these traits, but what in the school environment (and the larger community while I'm at it!) actually fosters the traits to begin with?

I say in most cases because sometimes the tests are poorly written and sometimes children can test well who have none of the traits you list.

by Montessori Teacher on Jul 10, 2013 1:42 pm • linkreport

Reposting:
Ken,

These data are seriously misleading and should not be used without additional information. Some of this I wrote about in the May 29th Current Viewpoint. I raised the issue again today before the Council Education Committee.
1) The data do not report transfers. You show 76 students for Thurgood Marshall in math in 2012, which would be grade 10. That TM cohort numbered 167 students in gr9 in Oct 2010. In Oct 2011, the cohort dropped to 100. Six months later, in April 2012, 91 were tested. Data from OSSE, enrollment audits and the NCLB (DC CAS) website.
I raised this issue several times before the Council Ed Comm, but neither Chm Catania nor CM Grosso will challenge charter reps or advocates on this issue.

Take Achievement Prep, for which the chancellor has closed down Malcolm X ES and excessed its teachers:
Oct 2009, Gr6: 21 students
Oct 2010, Gr7: 17 students
Oct 2011, Gr8: 11 students
- -
Oct 2010, Gr6: 44 students
Apr 2011, Gr6: 34 students tested
Oct 2011, Gr7: 26 students
Apr 2012, Gr7: 23 students tested
Oct 2012, Gr8: 18 students
= = =
The same is true for most of the charter high schools.
Chavez Capitol Hill:
Oct 2010, Gr9: 184
Oct 2011, Gr10: 133
Apr 2012, Gr10: 125 tested
= = = = = =
DC Prep - Edgewood MS
Oct 2011, Gr6: 61 stdts
Oct 2012, Gr7: 43 stdts
= = = = = =
KIPP Key & AIM don't show such dramatic drops, but probably because OSSE's "slice in time" data don't show transfers and replacements. The 2013 Mathematica study of KIPP schools reported that KIPP schools replace transfers with students whose test scores are higher than those who start at KIPP.
= = = =
For data that are valid, i.e. show actual change in tested learning mastery, one has to track the same students excluding those who transferred out and those transferred in.
= = =
We still do NOT know whether a charter school's average scores, whether displayed as static scores or gains, are the result of the school's collective instructional superiority or simply due to selective enrollment, i.e., attracting a large number of students, then transferring out the academically and behaviorally weak ones.
Chairman Catania can get the data, as can charter exec Scott Pearson and the DC OSSE staff. But they refuse.
Shouldn't we know how the privilege of transferring students from charters to DCPS impacts comparative scores and graduation rates?
Erich

by Erich Martel on Jul 10, 2013 1:53 pm • linkreport

Ken,
You wrote:
[[We have to tackle creamskimming. I believe that the only solution to the problem of transfers out of high-performing schools is for OSSE to add to its annual mobility study an anonymous audit of a sample of such transfers, particularly when the student is transferring to a low-performing school.]]

The enrollment data suggest that it is the unilateral, one-way charter-to-DCPS transfer privilege that CREATES what is being called a "high performing school." That's why we need to know the scores of all students before they entered the charter school and their scores after they transferred out or were expelled.
My guess is that the departing students had a larger effect on the charter school they left (upward effect) than they did on the DCPS school they transferred into (where they were a smaller portion of that school's enrollment).
Council members, mayor and charter board members need to be confronted with the data and asked when they will provide the public with untainted data.
Erich

by Erich Martel on Jul 10, 2013 2:05 pm • linkreport

sometimes children can test well who have none of the traits you list.

There may be some isolated cases of children like that, but overall, if the average scores are high, the preparation provided by the schools will be good, and if the average scores are low, the values of adulthood and skills necessary for good citizenship will be lacking.

by JustMe on Jul 10, 2013 2:39 pm • linkreport

@Erich Martel

I agree with you on the need for data from OSSE to better understand the "creamskimming". I would argue for a few pieces of data in addition to the "relief pitcher" data I suggested above.

What percentage of reported incidents at DCPS schools involved students transferred out of charters? Which charters were the source of how many incidents, suspensions, and days suspended in DCPS schools? We probably want to ask how many years since transfer we'd want to track this, but looking at these kids for a few years would give us an idea how much DCPS is burdened by that problem.

Survey parents who "voluntarily transferred" out of charters to find out why they left, both during the school year and between school years. I think one of the great things about Emma Brown's article on kids being kicked out of charters was the young woman who was given the option to withdraw rather than be expelled so it didn't show up on her transcript. She obviously didn't show up on the charter's expulsion numbers even though she was expelled, except without the "record" of expulsion. We need to have an idea how much of that is going on.

by Mary M. Melchior on Jul 11, 2013 10:50 am • linkreport

Here is an interesting related article on the Washington Post web site.

http://www.washingtonpost.com/blogs/answer-sheet/wp/2013/07/10/standardized-testing-scores-the-public-never-sees/

by Mary M. Melchior on Jul 11, 2013 12:43 pm • linkreport

On a related note, work by Jon Friedman from the Kennedy School
http://journalistsresource.org/studies/society/education/teacher-value-added-student-outcomes-adulthood?wpmp_switcher=mobile
greatly validates the value-added method as a measure of teacher quality by showing that:

1) teacher quality measured by the value added has a causal effect on long term outcomes (not just short term outcomes) such as college, earnings, teenage pregnancy;

and

2) there does not seem to be a very heterogeneous effect across race, family income, etc. of the students,
suggesting that it is a good assessment method for a wide range of students/schools. [Starting from a low test score might favor low-performing schools, while other factors might hinder learning among low-performing students; these effects seem to balance out.]

So I agree that schools should always publish Value Added measures rather than just test scores.

But in addition, given how small DC is and the choice set that parents have around here, I'd love to see these measures reported not just for DC schools but for the Washington area at large (Northern Virginia, Montgomery,...). After all, DC schools should be directly competing with schools in Arlington and Bethesda.

by Garance on Jul 17, 2013 4:16 am • linkreport

@Garance That's a good idea, but currently DC, MD, and VA all use different assessments. If they all adopt Common Core and start using common core assessments, then we can do the analysis you suggested. Would also be nice to track students and teachers across jurisdictions (in this case across state lines) using a common ID code, so when a student moves we don't lose the pre-test. But that may be too much to ask for.

by Steven Glazerman on Jul 26, 2013 10:00 am • linkreport
