Data and departmental change

Originally published on Toward Decolonizing Physics

Physicists care a lot about data. We rightly value statistical rigor and reproducible results, and are rarely content with just 1 or 2 sigma. When it comes to departmental change, we are no different: across multiple departments, one of the most consistent demands I have seen is that departmental change and processes be informed by data.

As someone who values informed decision making, I am grateful that physicists see the value of basing practice on data. One tendency that deeply saddens me is that some young activists have begun to eschew data collection (especially quantitative data collection) on the grounds that it is fundamentally oppressive or a distraction from the work of change itself. At the same time, I believe it is also important for physicists to interrogate our own assumptions about data and how those assumptions may be helping or hindering departmental change.

The importance of data

If my history of departmental changemaking in physics has taught me one thing, it is that we don't want to be making decisions blind. Data collection is particularly important for three main reasons. First, systematic data collection can unearth problems or opportunities that would not be anecdotally obvious; a major example in physics education research is the discovery that traditional lecture methods result in poor knowledge retention, which drove the shift to active learning at many universities. Second, data collection is essential for understanding whether (and why!) diversity and climate initiatives succeed or fail: post-surveys of a weeklong immersive physics education program I helped create showed consistent agreement about which elements of the program were particularly effective or ineffective, informing future changes to its focus and content. Finally, for better or for worse, it is far easier to acquire funding for initiatives and to counter institutional resistance when data shows a clear and compelling need for the proposed program.

Having data, however, is not enough; for decision-making purposes, the quality and (to a somewhat lesser extent) quantity of the data matter immensely. To be useful, data must:

1) Be sufficiently comprehensive, and broken down in sufficient detail. A climate survey is helpful, but without breakdowns by race and gender it will only highlight problems so pervasive that they affect even the white men in the department. Statistics on the diversity of an incoming graduate class are relatively meaningless without data about the applicant pool: if we have few female graduate students, is it because women aren't applying, because they are being disproportionately rejected by the admissions committee, or because they are being turned off by visit weekend? (A sketch of this kind of funnel analysis appears after this list.)

2) Be kept consistently over time. In classical dynamics, it is not enough to specify the initial positions of all particles; we must also specify their momenta. The same goes for diversity and climate data: without regular, long-term data collection, it is impossible to determine whether initiatives are succeeding or failing. Long-term data collection also helps average out the noise inherent in small-N statistics.

3) Draw from both quantitative and qualitative measures. This criterion irks activists and traditionally minded physicists alike. Many physicists, leery of the social sciences, dismiss qualitative data as insufficiently rigorous. Some activists, meanwhile, express an understandable disdain for quantitative data, given how often it has been abused. In reality, neither type of data can be ignored.

A focus only on quantitative data will always fail the most marginalized physicists; as a consequence of intersectionality, the experiences of multiply-marginalized physicists (who are usually too few in number to capture statistically) cannot simply be extrapolated from the experiences of the larger subgroups to which they belong. Qualitative measures, such as interviews and free-response surveys, can detect problems that quantitative measures overlook because they affect small populations (e.g. Black women, or female graduate students in a specific research group). Qualitative data alone is not sufficient either: quantitative measures are necessary for comparison over time and across institutions. Though flawed, diversity statistics and well-validated standardized assessments (which should not be used for evaluation or admissions, but I digress) are some of the only reproducible metrics available to us. The key is to interpret each type of data in the context of the other, remembering that neither alone can paint the whole picture.
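
To make the funnel analysis in point 1 concrete, here is a minimal sketch in Python. Every name and number below is invented purely for illustration; a real analysis would of course draw on your department's actual records.

    # Hypothetical admissions-funnel counts; every number here is invented.
    funnel = {
        #  group:  (applied, admitted, enrolled)
        "women":   (40,   8,  2),
        "men":     (160, 32, 20),
    }

    for group, (applied, admitted, enrolled) in funnel.items():
        admit_rate = admitted / applied    # who gets past the committee?
        yield_rate = enrolled / admitted   # who says yes after visit weekend?
        print(f"{group}: {applied} applied, "
              f"admit rate {admit_rate:.0%}, yield {yield_rate:.0%}")

In this made-up example the admit rates are identical for both groups, so the leak is at the yield stage rather than at the committee. Note also how point 2 enters: with numbers this small, pooling several years of data before computing rates is wise, since the statistical uncertainty on each rate shrinks roughly as 1/sqrt(N).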

Fallacies with data

As stated above, data can be a useful tool for informing departmental change, but attitudes and practices around data can also get in the way. In my experience, the following fallacies are particularly common, though this is by no means a comprehensive list. Some stem from understandable attempts to project data practices from experimental physics onto the messy world of humans and social science; others are widespread biases plaguing physicists and non-physicists alike.

  • Insisting that absence of data implies absence of need. Data can be a compelling tool for identifying problems, but even the most comprehensive data set can only measure what it was designed to measure. Sometimes more data truly is needed to understand a problem, but stalling for lack of data is often just an excuse to avoid progress. Often, critical theory or anecdotes of student experience can speak for themselves, at least well enough to get started.

  • Focusing exclusively on quantitative or qualitative data, to the exclusion of the other. An exclusive focus on quantitative data is far and away the more common problem, but I have seen both. As stated earlier, each provides insights the other cannot, and a focus only on quantitative data will never adequately capture the experiences of multiply-marginalized physicists.

  • Insisting that more data is always better. Yes, more data is usually better, but not always. If there is already enough information to act, insisting on more data is simply a delay tactic. Besides, it shouldn't take a 2 or 3 sigma result to prove the existence of departmental racism; amid a racist society, it is often safest to assume the existence of inequities until proven otherwise!

  • Insisting that more data is always better, part 2. It is now common knowledge that physics GRE scores correlate more strongly with race than with success in graduate school. So why do so many physics departments insist on retaining the test? One argument I have heard across multiple institutions is that decision-makers should have as much data as possible at their fingertips, even if some of it is flawed. This is emphatically not true; humans are not perfectly objective, and we are easily swayed by irrelevant or misleading data. Especially for evaluative purposes, the risk of including biased data far outweighs any benefit it might have.

  • Conflating individual and population-level metrics. Certain metrics, though reliable enough to characterize the collective behavior of groups of people (e.g. entire departments or institutions), fail dangerously when used to predict the performance of individuals. Scores on physics education research assessments, for instance, are useful for comparing the performance of whole classes but dangerously poor measures for evaluating individual students. Names or sex assigned at birth (e.g. from legal records) can serve as a rough proxy for gender diversity at the level of a department if no other data is available, but should never be used to predict an individual's pronouns. Remember, the central limit theorem applies to averages over large samples; it promises nothing about any single person! (The sketch below makes this concrete.)
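
To illustrate that last point, here is a toy simulation in Python. The score distribution is made up rather than drawn from any real assessment, but it shows why the same metric can be trustworthy for a class and useless for a student.

    import random

    random.seed(0)
    CLASS_MEAN = 60.0   # hypothetical true average score on some assessment
    SPREAD = 15.0       # hypothetical student-to-student standard deviation

    for n in (5, 50, 500):
        # Draw n individual scores scattered around the class mean.
        scores = [random.gauss(CLASS_MEAN, SPREAD) for _ in range(n)]
        mean = sum(scores) / n
        # The class mean tightens as n grows (error ~ SPREAD / sqrt(n)),
        # but individual scores remain spread over tens of points.
        print(f"n={n:3d}: class mean = {mean:5.1f}, "
              f"individuals range from {min(scores):5.1f} to {max(scores):5.1f}")

The class average settles toward 60 as n grows, just as the central limit theorem promises for aggregates, while the spread of individual scores never shrinks. That is exactly why a metric can be sound for comparing departments yet harmful for judging a single person.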

I encourage all of us to keep a special lookout for these fallacies in ourselves; it is far easier for most of us to see others' biases than our own. Having these biases does not mean anything is wrong with you (it is part of human nature!), but recognizing them is the first step toward working productively past them. Recognizing these fallacies is also useful when working as an activist to make change: we can spot them in others and gently, humbly find the common ground to work past them.

Conclusion

To the extent it is available, data is an essential part of any effective departmental changemaking work. Data is not the be-all and end-all that capitalist society makes it out to be, but it is a useful tool that cannot be neglected. The key is to collect sufficient (and sufficiently high-quality) data to build an informed perspective on the issues, while remembering that data without interpretation does not establish truth. By being mindful of how we interact with and interpret data, we can avoid common fallacies and use data wisely, rather than reactively, in our work.
