Data, data, and more data—what’s an educator to do? by Paul Goren

This month, AJE released a special issue on the Practice of Data Use, guest edited by Cynthia Coburn and Andrea Bueschel. In the issue, Paul Goren, senior advisor to the University of Chicago Consortium on School Research and the Chicago Public Schools, contributed his thoughts in a piece entitled Data, Data, and More Data—What’s an Educator to Do? The full text is offered below and may be downloaded freely from AJE’s February 2012 issue.

If you visit a district central office, a state department of education, or a principal’s office these days, you will hear the current rhetoric about data use for school improvement. Since the passage of No Child Left Behind, data on school performance, disaggregated by racial/ethnic group, special education and language status, and gender, are widely available, open to public consumption, and intended to lead to improvement. The disaggregation of these performance data is significant: it pushes schools, school practitioners, and education policy makers to understand the performance of all students, not just the average performance of students at any given school. Yet the ubiquity of data in the public domain carries the same risk as every education fad that has preceded it: sweeping rhetoric that yields false promises about improving schools and the life chances of young people.

Data-driven decision making. Performance management metrics. School indicator and warning systems. School climate measures. Formative assessments. Summative assessments. Administrative data. Graduation rates. Attendance patterns. Dropout metrics. Test scores. Value-added assessments. High-stakes evidence-driven reform. The implicit and explicit assumption is that if these data exist, improvement will soon be evident. It reminds me of the old quip about the American who goes to France and speaks English louder. Here are the data. … Improve.

The articles in this issue call for a deeper and better understanding of data, their use, the conditions that are most conducive for using data well, how individuals and groups of practitioners make sense of the data before them, and the intended and unintended consequences of data use for school improvement. The authors together craft important messages about what type of research must be done to address these concerns. But perhaps even more important, the authors offer a clarion call to education policy makers and school practitioners that school improvement leading to better outcomes for all children will require more than delivering data to the schoolhouse door.

A major theme across all four articles is that our understanding of how data lead to improvement in education is tremendously underdeveloped. There is much we do not know about data: how they are interpreted in context, how they are used, what happens when they are used, and how to improve both the data and their use. This should by no means suggest that when information is disaggregated by race, gender, or poverty status, we must wait years for research studies before we act. Yet it does mean that our assumption that data inevitably lead to improvement is less certain than we think.

To understand how practitioners interpret and use data, we must attend to the contexts within which data use unfolds, including classroom, school, and broader policy settings. Judith Warren Little argues that what teachers really do with data remains “opaque” and that without illuminating the processes by which they make sense of data, we will remain in the dark about data use. She calls for work that can capture the context, content, practice, and relationships that are present when people interact with data, emphasizing how both the school and policy environments influence such use. One of the key contexts is teacher meetings or grade-level groups—what James Spillane calls data use routines. But Little and other authors also provide convincing evidence that school, district, and even broader institutional contexts shape what data people use, what they notice about those data, and how they make meaning of them. All of this, in turn, influences how teachers and others respond in the classroom.

Little also highlights the role of content knowledge. Reviewing a study by Timperley (2008), she argues that the meaning that teachers make of data and, especially, the implications that they draw for instructional change are influenced by teacher knowledge. All too often analysts draw parallels to the medical profession when thinking about data use in education. Without getting into a critique of modern medicine, think about how a doctor uses patient data to make quick decisions and how much of that decision-making process depends on a doctor’s expertise, content knowledge, and craft knowledge accumulated from practice and experience. Expert teachers, principals, school administrators, and education policy makers do the same, depending on expertise, content and craft knowledge, and their own experience. This suggests that data unto themselves, in the absence of knowledge on the part of the user, will not lead to improvement.

These articles also suggest that data use and interpretation occur at multiple levels of the education system. In particular, Meredith Honig and Nitya Venkateswaran argue that school practitioners work with data and evidence within a context in which other actors, especially in the central offices of school systems, are also interacting with and interpreting the same data as part of their job responsibilities. Thus, data use is a systems problem rather than just a school activity. Honig and Venkateswaran characterize educators and central office officials engaged in data and evidence use as “practitioners juggling multiple forms of evidence simultaneously” (201). Teachers and school leaders depend on central office interpretations, while central office actors simultaneously depend on how schools use and understand such data. This, then, raises the profile of individuals who can broker information between these different levels of the system. Making sense of such information at the multiple levels of the education system is commonplace yet obviously adds to the complexity of data use for improvement.

Viewing data use as a systems problem highlights the challenges facing the current data use frenzy in education nationwide. Data mean different things to different people in different settings. School practitioners can learn from central office administrators, and central office staff can learn from school practitioners. And indeed, what is necessary at all levels is the presence of individuals with the capacity to interpret, understand, and broker the information for appropriate use.

Spillane’s article highlights another key facet of data use: the form and function of data reports. How data and evidence are delivered to people in schools influences what aspects of the data practitioners notice and attend to. While Spillane emphasizes that state and district policy makers decide how to package and deliver achievement test data to schools, I would argue that the test companies play an important role here as well. These data may or may not relate to actual classroom or school practices, yet they influence these practices nonetheless.

Ultimately, the strength of Spillane’s article is his emphasis on the embedded contexts of data use. These contexts start at the most proximal level, with organizational routines that play a key role in who is at the table and how they interact around data. But they also extend to the larger policy context, which provides logics of action that become linked to classroom practice via the data use routines. Most importantly, this article challenges us to think about how data use practices, in the multiple contexts he explores, are maintained and ultimately institutionalized—a key point for ensuring that using evidence is not just another passing fad in education.

Data use also occurs within the context of organizational routines followed by practitioners across the education system. Spillane emphasizes this point in his article “Data in Practice: Conceptualizing the Data-Based Decision-Making Phenomena.” He underscores what the other authors stress: that the research on data use is painfully underconceptualized. Spillane offers several theoretical frameworks through which to consider data use and to understand it in actual practice.

Finally, these articles suggest that we take a closer look at what data are actually measuring and why. Jeannette Colyvas analyzes performance metrics in higher education. While these metrics—rankings, benchmark test scores, and other performance measures—are intended to give practitioners and the public a more transparent understanding of expected performance outcomes, Colyvas argues that they do not always work in that way. First, she argues that once performance measurement systems and measures are introduced publicly, it is difficult to undo or remove them. In a sense, they take on a life of their own whether or not they are appropriate measures. Ask any higher education administrator what they think of the U.S. News and World Report rankings. More than likely, they will offer numerous and quite substantive criticisms while also noting the importance of attending to these measures. The same is true for K–12 standardized achievement tests and metrics such as adequate yearly progress required by No Child Left Behind. Numerous critics can explain why the tests do not test what teachers are teaching and students are learning, yet they remain a vital component of the current narrative on school improvement and high-stakes accountability. They have essentially become a taken-for-granted part of public schooling.

Second, Colyvas notes that schools and universities tend to add on performance metrics rather than doing the more difficult work of creating measures that actually capture desired outcomes. The pile of metrics grows higher and higher without necessarily becoming more useful or effective. If these metrics are ultimately going to be effective, Colyvas argues, they must be accurate, easy to work with, applicable in a variety of situations, transmissible and transparent, and constructed in a way that allows them to be improved over time. Underlying her commentary is the importance of paying attention to the intended and unintended consequences of using such metrics, or any data or evidence, for education improvement.

What is an educator or policy maker to do in the world of data-driven, evidence-based decision making? Given Colyvas’s warning that there are limited or no opportunities for turning back once metrics are public, the four articles provide data consumers with several appropriate cautions. Data do not, by themselves, lead to improvement. The context, the setting, and the environment in which data are delivered all matter. Interpreting evidence is not a solo act; meaning comes from how a variety of individuals at different levels of the education system understand and make sense of data. While educators continue to receive increasing amounts of information to improve their practices, attention must be paid to whether the data can do what they are intended to do. The assumption that all data can be simplified into usable knowledge to change practice runs right up against the capacities of teachers, principals, administrators, and education leaders to truly understand the nature and content of their specific practices, the actual evidence provided, and the data in the context of their practice. Back to speaking English louder: it is essential for practitioners at every level of the system to build multiple fluencies in data and evidence use if they are to have the capacity to use data to improve practice. While researchers continue to explore and understand data use in multiple educational contexts during this era of data-driven decision making, practitioners and policy makers will need to interact with the data and evidence before them with a healthy sense of constructive skepticism.

Reference

Timperley, Helen. 2008. “Evidence-Informed Conversations Making a Difference to Student Achievement.” In Professional Learning Conversations: Challenges in Using Evidence for Improvement, ed. Lorna M. Earl and Helen Timperley. New York: Springer.