The Honesty Gap: The Illusion of Measuring Proficiency

posted in: Reform | 0

One of the great gifts of standardized testing was the premise that giving the same questions to many students would allow for a more reliable measure of student achievement. Reliability is gained through the standardization of the test, while validity is gained through the relevance of the questions to the test's stated purpose. In the 1920s, college admissions officers leaped on this bandwagon in order to compare students from New Hampshire and Ohio, and thus was born our accountability system tied to test scores.

However, there are some problems with this logic. What if the students had not been prepared in the same way? What if there were cultural reasons why answers might differ across state lines? What if savvy teachers or test-coaching companies could study the test and coach students accordingly? And what if test companies manipulated the pass/fail line, commonly called the cut score, for political reasons? In a recent article in Education Next, Michael J. Petrilli discusses the illusion of proficiency and the resulting gap in honesty:

Formative Assessment Through Micro-Data Collection

posted in: Assessment, Reform | 0

In the struggle over what kinds of data matter, not just big data and little data but data that genuinely deserve our attention in this busy world, my favorite is the data that move, improve, and educate teachers. In NYC, the School for Global Leaders is using micro-data that might be described as so small it hardly seems worth collecting. And yet this micro-data, for example where students sit, how much time they spend in certain instructional groups, or how much learning is attained in lecture formats, is the best type of assessment data to collect: