The Honesty Gap: The Illusion of Measuring Proficiency

posted in: Reform

One of the great gifts of standardized testing was the premise that giving the same questions to many students would allow for a more reliable measure of student achievement. Reliability is gained through standardization of the test, while validity is gained through the relevance of the questions to the test's stated purpose. In the 1920s, college admissions officers leaped on this bandwagon in order to compare students from New Hampshire and Ohio, and thus was born our accountability system tied to test scores.

However, there are some problems with this logic. What if the students had not been prepared in the same way? What if there were cultural reasons why answers might differ across state lines? What if smart teachers or test-coaching companies could study the test and provide useful insight? And what if test companies manipulated the pass/fail line, commonly called the cut score, for political reasons? In a recent article in Education Next, Michael J. Petrilli discusses the illusion of proficiency and the resulting gap in honesty:

In 2007, the Thomas B. Fordham Institute published what was probably the most influential study in our eighteen-year history: The Proficiency Illusion. Using data from state tests and NWEA’s Measures of Academic Progress, our partners at NWEA estimated the “proficiency cut scores” of most of the states in the country. We expected to find a race to the bottom during the No Child Left Behind era; instead we found a walk to the middle. Importantly, though, we also demonstrated the vast discrepancies from state to state—and within states, from subject to subject and even grade to grade—when it came to what counted as “proficient.”  Checker and I wrote in the foreword:

What does it mean for standards-based reform in general and NCLB in particular? It means big trouble—and those who care about strengthening U.S. K–12 education should be furious. There’s all this testing—too much, surely—yet the testing enterprise is unbelievably slipshod. It’s not just that results vary, but that they vary almost randomly, erratically, from place to place and grade to grade and year to year in ways that have little or nothing to do with true differences in pupil achievement. America is awash in achievement “data,” yet the truth about our educational performance is far from transparent and trustworthy. It may be smoke and mirrors. Gains (and slippages) may be illusory. Comparisons may be misleading. Apparent problems may be nonexistent or, at least, misstated. The testing infrastructure on which so many school reform efforts rest, and in which so much confidence has been vested, is unreliable—at best. We believe in results-based, test-measured, standards-aligned accountability systems. They’re the core of NCLB, not to mention earlier (and concurrent) systems devised by individual states. But it turns out that there’s far less to trust here than we, and you, and lawmakers have assumed.

So it would seem that as accountability for student performance shifts away from students, resting uneasily on the shoulders of administrators and now teetering on top of unhappy teachers, the very way in which we have chosen to measure that accountability is quite faulty. This is not news to testing insiders, but it seems to be gaining traction among testing outsiders: the number of students who opted out of testing last year in New York State was 50,000, while the number this year will top 200,000.

There are a number of things we can do about this. Step one is to redefine what it means to be accountable in education and to whom we should be accountable. Step two is to reorganize how we test students and to use those scores properly, not for high-stakes decisions. Step three would be to develop new models of schooling, inside the old models, that bring out and document the real potential of children to improve themselves and our world through formative assessment strategies, performance assessment, and portfolio assessment. Formative assessment tells students and teachers how to correct and increase learning along the way in a class or a course. Performance assessment asks students to demonstrate their learning, and portfolio assessment is a set of collection strategies that fully documents what students know and can do.

New Models for Student Centered Learning

The Stanford Center for Opportunity Policy in Education has recently released an interesting report (especially in this inquiry-model world) titled Centered on Results: Assessing the Impact of Student-Centered Learning.

From page 4:

“What is Student-Centered Learning?

Student-centered learning does not represent a single curriculum, model, or practice. Rather, it draws on a variety of concepts in education, the brain sciences, and the child and youth development fields, comprising those instructional practices that engage individuals in learning deeply and reaching their highest potential. Nellie Mae has identified four tenets of student-centered learning:

  1. Learning is personalized: Personalized learning recognizes that students engage in different ways and in different places. Students benefit from individually-paced, targeted learning tasks that start from where the student is, formatively assess existing skills and knowledge, and address the student’s needs and interests.
  2. Learning is competency-based: Students move ahead when they have demonstrated mastery of content, not when they’ve reached a certain birthday or endured the required hours in a classroom.
  3. Learning happens anytime, anywhere: Learning takes place beyond the traditional school day, and even the school year. The school’s walls are permeable – learning is not restricted to the classroom.
  4. Students take ownership over their learning: Student-centered learning engages students in their own success – and incorporates their interests and skills into the learning process. Students support each other’s progress and celebrate success.”

Page 12 discusses themes for moving forward:

  1. Teachers who implemented a higher degree of student-centered practices had larger gains in student outcomes.
  2. School culture matters.
  3. Teachers need support.
  4. We need clearer definitions and examples of student-centered practice across disciplines.

Check out the report for information about the study’s methodology and results.

In the meantime, here is what Michael suggests:

Allow me to summarize:

1. It’s critically important that states tell parents, teachers, and kids the truth about whether individual students are on track for college or career. By moving to tougher, Common Core-aligned assessments with much higher cut scores, states can finally close the honesty gap and make good on this commitment.

2. It’s not practical to link high school graduation to college readiness—unless we want to deny diplomas to a majority of the nation’s twelfth graders. Colleges, on the other hand, should stop admitting students who are well below the college-ready level.

3. When it comes to measuring the effectiveness of schools—the true purpose of state accountability systems—fairness demands that we control for the performance of students on the front end. Thus, rating systems should be based on individual student growth over time.