A slew of research has come out in the last few years (and there’s more forthcoming from my collaborators and me) showing that the end-of-semester ratings students give teachers, usually on a scale from 1 to 5, are significantly biased against female professors. The obvious question is: if not student evaluations of teaching (SET), how should we evaluate instructors? I recently saw this article on Twitter. It argues that “female faculty should receive an automatic correction” on their SET scores, meaning that the administration would add a fixed number to every female instructor’s score in order to make it comparable to male instructors’ scores. This adjusted score would then be used to decide whether the instructor should be rehired to teach, be given tenure, and so on.
I don’t believe this can be done, for a number of reasons. There are other biases and confounding variables besides gender that make it impossible to find a single number to add to every female instructor’s score.
- Biases are not consistent across fields. For example, at Sciences Po in Paris, there is a greater proportion of female instructors in sociology than in economics, and the observed gender bias is smaller in sociology than in economics. Any correction to SET scores would therefore have to vary by subject matter.
- Biases depend on student gender as well. Our research shows that in some schools, male students rate their male instructors significantly higher than their female instructors, while female students tend to rate both the same. This is a problem for adjusting scores because the gender balance of the class itself affects the instructor’s score. For instance, imagine a hypothetical male instructor who teaches two identical classes. On average, his male students give him a rating of 4.5 and his female students give him a rating of 4.0. In the first class, the gender balance is 50/50, so his average rating is 4.25. In the second class, there are 80 male students and 20 female students, so his average rating is 4.4. There’s no one magic number to add to or subtract from this average to cancel out the gender bias when comparing his score to the SET of other instructors.
- There is some evidence that SET are biased by the instructor’s race and age as well. We lack data on this, but similar work on bias in hiring decisions has shown that people (men and women alike) comparing identical resumes tend to prefer job applicants with male, European-sounding names. Anecdotally, instructors who have accents or are above average age (even as young as mid-thirties in some places!) fare worse on their SET.
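To make the arithmetic in the hypothetical instructor example concrete, here is a minimal sketch. The ratings (4.5 and 4.0) and class compositions (50/50 and 80/20) come straight from the example above; the function name is my own invention.

```python
def class_average(avg_male, avg_female, n_male, n_female):
    """Enrollment-weighted average rating for one class."""
    total = avg_male * n_male + avg_female * n_female
    return total / (n_male + n_female)

# Same instructor, identical teaching, different gender balance:
balanced = class_average(4.5, 4.0, 50, 50)  # 50/50 class -> 4.25
skewed = class_average(4.5, 4.0, 80, 20)    # 80/20 class -> 4.4
print(balanced, skewed)
```

The gap between the two classes (0.15 points) is driven entirely by enrollment, so any fixed additive correction that equalizes one class would miss on the other.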
The list could go on; I’m sure there are plenty of other confounding variables, like the time of day of the class, the difficulty of the course material, and so on, that affect how students tend to rate their instructors. To find a correcting factor for each female instructor, you’d have to measure all of these variables and adjust for each of them. In fact, you ought to do that for male instructors too, since gender isn’t the only bias. This just highlights the fact that SET aren’t measuring teaching effectiveness in the first place; they’re a better measure of how comfortable or satisfied a student is in the class.
Admittedly, the title of this post sounds combative. But it’s not — of course something needs to be done about the pervasive gender bias that’s causing female faculty to lose teaching positions and costing them job promotions. I’m merely arguing that it is impossible to effectively “correct” for gender bias, and so alternative, more objective means for evaluating teaching effectiveness should be used instead of SET.
An interesting editorial on research practices came out in PLOS Medicine yesterday. It’s good to hear about reproducibility and the reforms we need to see in science from a fellow statistician, John Ioannidis over at Stanford. Each discipline has its own quirks and accepted practices, but statistics is a common factor in every study. I believe we statisticians have a unique perspective on the problem: we get to play the role of data advisor on other people’s studies and of PI on our own.
Ioannidis cites examples of things that work in several fields, including reproducibility practices, data registration, and stricter statistical analyses. Then he proposes a new “structure of scientific careers” that doesn’t just favor old men with fancy titles and big grants. In this framework,
> Resources and power are seen as opportunities, and researchers need to match their output to the opportunities that they have been offered—the more opportunities, the more the expected (replicated and, hopefully, even translated) output. Academic ranks have no value in this model and may even be eliminated: researchers simply have to maintain a non-negative balance of output versus opportunities. In this deliberately provocative scenario, investigators would be loath to obtain grants or become powerful (in the current sense), because this would be seen as a burden.
I got to this part of the article and thought, “Wait, this sounds crazy.” It almost seems like there would be no incentive to work hard: any award would come with negative consequences, and you’d be punished if your work didn’t produce results. Isn’t that exactly the perverse incentive research reforms are trying to get around? Maybe a greater emphasis on sharing negative results would solve this problem, but I digress.
After reading this the first time and feeling my knee-jerk disagreement, I took a step back and realized that my negative response is precisely due to my being immersed in the current culture of “publish or perish” and academic hierarchies. I’m so entrenched in this way of thought that it’s hard to see other models for scientific careers. However, I’m on Ioannidis’s side, and I believe we need to seriously rethink the way research is done in order to get more high-quality results.
Frankly my commentary on the subject is pretty useless because it’s a hard question and I’m no expert. You should just go read the article here.
I’ve been working on the same project on and off for a bit more than a year now. From the get-go I knew I’d need to document my steps, so I started using a little green spiral notebook to keep track of what I did each day. Fifteen months later, it’s time to write up the project, and I’m shocked by how sparse and unhelpful the notes I’ve kept are. It’s not so hard to find the code you need when you wrote it several weeks ago, but what about the code you wrote six months ago? And when you find it, how do you use it? What inputs do you need to supply, and what outputs does it spit out?
Unfortunately, nobody teaches you how to do research efficiently; I’ve been learning as I go. Since starting this project, I’ve learned what doesn’t work for me: naming files by date. This is a convention I picked up from a mentor of mine a while back, and frankly I don’t know how he made it work for him. The problem is pretty obvious: you don’t know what’s in each file until you open it. I suppose it’s a reasonable practice for version control, if every time you modify a file you save a new copy with the date. But even then, when you go back to find the right code, how do you know which copy to choose? It also leaves a lot of duplicated code taking up disk space. I’ve only found this file naming convention useful when I also summarize the file contents in my spiral notebook, and unfortunately I didn’t have the self-discipline to do that consistently.
What has worked for me so far is keeping a “Methods” subdirectory in my main project directory. Maybe “Methods” is a misnomer, since the folder also includes presentations for meetings and intermediate results. In there, it makes sense to date files so there is a chronological workflow. Again, I wasn’t consistent about keeping the files in this folder up to date, but the notes I did make as I went along have been immensely helpful.
Where to go from here? I’ve learned a few things along the way:
- Automate as much as possible. When you write a script, test it out as is, but once you’re convinced it works properly, wrap it in a function. You will inevitably have to rerun your analysis, maybe on many different datasets, and it’s useful to have it in a function where you only have to change the inputs. Along the same lines, avoid one-off scripts and running things interactively at the command line. These shortcuts may be faster in the moment, but they’re not reproducible and will give you a headache later on.
- Write in your lab notebook consistently. Self-explanatory. I wish I’d read this earlier: Noble suggests keeping an electronic “lab notebook” in the same directory as your project. I like this idea because then you can include plots, bits of code, output, etc. and it is easy to share with others if need be.
- Comment code profusely. In Python, it’s good practice to include a “docstring” at the beginning of every function, enclosed in triple quotes (`"""`). Do the same in any language: describe what your function does, the function’s inputs and outputs, and any packages required for it to run.
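As an illustration of the docstring convention, here is a hypothetical function (the name and file format are made up for the example) whose docstring covers purpose, inputs, outputs, and dependencies:

```python
import csv

def load_ratings(path, sep=","):
    """Read per-student course ratings from a text file.

    Inputs:
        path: path to a file with one 1-5 rating per line
              (first field of each row).
        sep:  field separator, in case extra columns are present.
    Output:
        A list of ratings as floats, one per row.
    Requires:
        Only the Python standard library (csv).
    """
    with open(path, newline="") as f:
        return [float(row[0]) for row in csv.reader(f, delimiter=sep)]
```

A nice side effect: `help(load_ratings)` prints the docstring in the interpreter, which is exactly what you want when you return to the code six months later.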
I think this quote from the linked article sums it up:
> The core guiding principle is simple: Someone unfamiliar with your project should be able to look at your computer files and understand in detail what you did and why.
Right now, that unfamiliar someone is me. May the next project go better!