Mindful Measurement: Reflections on Expertise, Power, and Biases in Research
As the world around me becomes more robotic and impersonal, being mindful about what I do, think, and feel professionally has become increasingly important. Prior to COVID-19, this entry would have looked far different than it does today. The one variable that has surfaced as more important than anything else in my work is the public education we provide about interpreting numbers and coming to our own conclusions based on the evidence. As I watch the most Orwellian set of events unfold around me, I see so much vulnerability in people who unwittingly accept what experts say and are seduced by numbers that require far more interrogation than the media, or even the experts themselves, are willing to provide. The lack of critically minded numeracy among laypeople and experts alike is mind-blowing.
In moments of social strife, mindful measurement takes on an alarming importance. Everyone is looking for accurate measures, but more importantly, they are also looking for thoughtful and meaningful interpretation. Mindfulness about how numbers are presented and interpreted becomes vital to ensure that those who depend on us are fully aware of the methods of counting, the limitations of those methods, and, more importantly, the options for courses of action (i.e., risk-benefit and consequential validity). Mindful measurement, then, is much more than expertise in how measures are created, administered, and summarized. Mindful measurement is about understanding the power relationship between the “expert” and the “consumer” or “public,” and not leading the public down a narrow path of options that reflects more of what the expert thinks than what would be freely chosen by an accurately informed public.
In light of the way I have broadly defined mindful measurement, I have become far more critical of the interpretations provided by experts generally, the tacit biases they may hold, and how I come to my own conclusions about the data. This has naturally led me to reconsider how I present my own analyses and framing of numbers, and the importance of revealing my own biases. When I work by myself, then, I ask more big-picture questions such as: Who benefits from interpreting the results in this way? What assumptions underlie the measurement of a given construct (e.g., educational achievement) for the purpose of associating it with other constructs (e.g., types of testing, educational programs)? Why is it being done this way and not another way?
When I work with colleagues, I have started to ask more basic questions – not because I do not know the “boiler plate” responses to these questions, but because I want to know whether we have all thought deeply about why we are following a typical course of action. For example, at a recent conference I publicly questioned the truthfulness of much of the data collected about students’ social and emotional experiences in school. I increasingly doubt that students respond freely to the questionnaires we administer to them. Why do I think this? Although I could hire research assistants to collect data, I often participate in data collection myself to observe the contextual conditions of the administration. More now than ever before, I engage in impromptu conversations with students who are responding to our measures, asking them what they think about the questions and whether their responses reflect what we aim to measure. This requires developing a relationship, or rapport, with students.
For example, in a recent data collection initiative, we administered closed-response surveys to elementary school children to find out how they felt about receiving assessment feedback from teachers. The students offered their “official” responses, and then we probed these responses a little further. One of the interesting observations from these probes was that many elementary students knew what the “correct” responses on socio-emotional questionnaires were supposed to be. Schools with strong narratives appear to tacitly shape the range of acceptable responses that students give about their experiences. This has led me to focus on children’s rights in the classroom, the ease (or lack thereof) with which they can share their thoughts, and how school officials often act as gatekeepers between researchers and the students they wish to understand and help emotionally and academically. These situations and obstacles have pointed me toward working with students directly (via parents) rather than through schools, and toward developing clinical interviews instead of “boiler plate” surveys to probe students’ authentic thoughts, experiences, fears, and joys.
This shift from traditional measurement to more mindful measurement has led me to focus not only on survey and item design but also on the situational power variables that influence, interfere with, and can ultimately distort the results I used to accept at face value. When I work with graduate students nowadays and they allude to expert opinion, I often ask: How do you know this is true? Have you thought about the logic behind this statement? Could you sketch out the structure of the argument for this claim? Or are you simply trusting that experts know best? Some of this trust is unavoidable, but too many have abdicated their own common sense in the process.
When I became interested in research and, specifically, in teaching others as a professor to think about and conduct research, I assumed that open and critical thought were givens – even if it meant challenging the status quo. This is why academic freedom is so vital in my view. As our learned societies become more specialized and we depend ever more on outside expertise for everything from car repairs to medical information, there is a temptation to give up the effort of asking questions – even among researchers. I understand the trepidation. When we question experts and suggest that the logic of a given analysis is incomplete, falls short, or admits a wider set of options than those provided, there is a tendency to dismiss the non-expert for ignoring the obvious expertise. But I disagree. The onus is on experts to be open and transparent about what they claim and why they are claiming it. For experts in measurement, then, mindful measurement means realizing that being defensive about what we believe to be true is the death of open and critical thought. Thus, my responsibility and recommendation, to myself and others, is not just to be mindful of the measurements we develop and the conclusions we draw, but also to remain constantly vigilant about what others measure, claim, and peddle in the name of science.