Word Attack: “Data”
Like many people interested in education, I’ve been thinking a lot about assessment and data. Yesterday, I casually polled my teacher friends on Facebook. I asked, “What’s the first thing that crosses your mind when you hear the word ‘data’?” Though hardly scientific, the handful of responses I received was quite interesting. Ultimately, two themes emerged that illustrate why the d-word is both incredibly important and incredibly problematic.
Two of the responses I received talked about what I think of as “small-d data”, or the word that simply means “information.” One friend who works in private schools said, “I teach my first graders probability each June and they love to talk about collecting data, which I introduce in that unit.” Another who majored in Special Education in college wrote that she “[thinks] of IEPs and how an educator will evaluate the process of [teaching] his or her students, and whether or not the goals of the IEP are being met.” Used this way, data is a useful, “objective” way to measure and understand observable trends and phenomena in the world. It’s this kind of data that helps us make informed decisions by balancing our impressions and opinions, which are necessarily biased, with so-called hard evidence – things you can count, directly measure, or calculate.
When people praise the use of data in education, it’s typically this use that they have in mind. As teachers, we want to know as much as we can about our students, so we can strategically target our instruction to meet their specific needs. Administrators – theoretically, at least – use data similarly, to support teachers in their professional development and everyday practice. Using data from valid assessments in classrooms and school buildings supports the learning process, because it gives timely and useful information to those who can use it to improve learning opportunities.
The other responses I received were “standardized tests… ugh!”; “It seems to be all that higher-level admins care about”; and “America’s children are more than the federal government’s data.” These responses correspond to Data, which is something entirely different from the modest and helpful information discussed above. Rather than supplying relatively objective measurements of student learning, Data is typically gathered under pressure for educational outsiders, often from assessments that have been used inappropriately or are of questionable value.
“Data” is one of the sexiest words – if not the sexiest – in current educational jargon. If you want to curry favor with foundations, elected officials, or other self-styled school reformers, slipping Data into your proposal or conversation as often as possible is the quickest way to do it. Data is sexy because it still sounds like the word it replaced, and therefore has the ring of objectivity, truth, and fairness that people are looking for when they seek quick fixes for problems in education. If you claim to have Data on your side, you gain the upper hand in many reform conversations. That’s because untrained ears often fail to realize that Data is less informative and less objective than it sounds.
In addition to the well-known problems that afflict standardized tests (the metrics from which most Data come), there are other problems associated with runaway uses of Data and assessment. To satisfy demands for performance Data, school districts and governments are using pre-packaged tests inappropriately. Even test-makers agree that it is inappropriate to make high-stakes decisions on the basis of one test, however valid it might be. But as we speak, schools are being targeted for harsh interventions on that very basis. Often, public officials then use the Data as a shield to deflect the criticism they face from the communities they’ve disrupted and humiliated, as though the numbers themselves, rather than the people who acted on them, had demanded the harsh intervention.
Additionally, tests that were designed to measure what students know are now being proposed as a measure of teacher effectiveness, a purpose for which they were never intended. It is impossible for a test to identify who is responsible for what knowledge a student may have. For instance, my mother taught me to read long before I started school. As a result, I was often several years ahead of my classmates in reading. Under this new “measurable effectiveness” paradigm, my first-grade teacher, who allowed me to read and play on my own while she taught other students phonics, would have been given credit for what was ultimately my mother’s and my achievement. Even so-called “value-added” assessments, which look at what gains a student makes over the course of a year, cannot really demonstrate whether a teacher was “effective” or not. Even if one assumes that children can only learn what adults “put” in their heads (banking, anyone?), because students don’t spend 100% of their time with just one teacher, it’s still difficult – if not impossible – to discern who is responsible for what knowledge. Was it the classroom teacher? The librarian who reads with children after school? The math tutor? Sesame Street?
While the data may show what a student has done on a particular test on a particular day, they do not show what his or her teacher has done over the many days spent in school. Yet, to those who prize Data above all else, linking a teacher’s future to a number that could possibly, maybe be associated with said teacher is an acceptable thing to do. After all, it sounds objective, and it reduces the observer’s need to know anything about teaching or learning (which is appealing to the increasing number of non-educators who are leading schools and school districts).
Those problems are made even worse when the tests themselves are bad. I am but one of many teachers (and parents) who have fought battles over DIBELS, a test that has been widely adopted despite controversies over whether it measures what it claims to measure and over possible conflicts of interest among those who pushed for its adoption. When I questioned why I should give this test instead of alternatives that produce useful data, I was told that we were doing it to generate Data for central office administrators. Useless though it might have been, DIBELS can be used to make graphs, which are friendly to outsiders who don’t understand reading development. (For the record, those are the paraphrased words of my former principal. This stuff actually happens, folks; I have the e-mail to prove it.) The administrators being shown those graphs would have been led to believe that the Data showed improvements in reading, when all the data actually showed was how many written words students could accurately identify in a minute. Those are two very different things.
I could go on forever with this stuff. But, to wrap things up and keep it useful, know that there is a difference between data (useful, relatively objective information) and Data (biased, distorted, or otherwise questionable information used to create the appearance of objectivity when making or influencing subjective decisions). Here are a few questions anyone should ask when trying to tell the difference between data and Data in education:
- Where does this data come from? What test or other measure was used to generate it?
- Is that a reliable and/or valid measure of [whatever it purports to measure]?
- Can I see the source of that information?
- Can you explain what this test does/what this data really means? (**Red flag alert!** The number one way to tell that you’re dealing with Data instead of useful information is if the person presenting it can’t explain the nuts and bolts of how the measure really works. If you’re having trouble deciding whether there’s a discrepancy – say, if you’ve got a smooth-talking presenter or someone else who’s using the ‘right’ terms, but they sound a bit hollow to you – take the test yourself and decide if you think it measures what it says it does.)
ETA: Also, check out AERA’s statement on high-stakes testing and the responsible use of assessments!