[
academia
]
I have been a little preoccupied lately, trying to get my independent research venture from concept to positive cash flow while also grieving the unexpected loss of my father.
Fernando was a hard worker and an even tougher man – as a bristly ex-Marine, what else would you expect? But he had a huge heart. He would give you the shirt off his back if you needed it, probably while also yelling at you for getting into a situation where you needed his shirt in the first place.
Did I mention he was a hard worker? He spent his entire career as a skilled laborer for Ford Motor Company. Long hours. Overtime. Midnight shifts. He was one of the fortunate ones, an underrepresented minority with access to the US middle class via a well-paid, union-negotiated factory job. As his child, I was even more fortunate, with the chance to pursue academics without the pressure most minorities and/or children of immigrants face: trying to better yourself while also supporting your parents. That was his burden, and I am forever grateful for his support. I love you, Papi.
This preamble leads me to a post I wasn’t expecting to write just yet, since I want to start diving into STEM-related topics. But my father’s passing, my own recent departure from my academic position after years of increasing disenchantment, and the headline news about the esteemed retired Princeton professor turned NYU lecturer getting fired after pre-med students petitioned that his class was too difficult have put this subject front and center in my mind.
If you haven’t heard or read the story, it popped up pretty prominently in my LinkedIn and Twitter feeds this past week. The New York Times published a story about Dr. Maitland Jones, Jr., a highly regarded chemistry professor, author of a staple organic chemistry textbook, and a pioneer of problem-based learning in chemistry education. He was unceremoniously (ahem) sh#t-canned from his NYU lecturer gig after a group of his students complained and petitioned to NYU administration that he was too strict, too harsh on grading, and dispassionate toward his students’ concerns. Read the full story here if it hasn’t been pay-walled yet.
The truth in these stories is always more nuanced than our lizard brains allow at first blush. And the social media response was, as you would expect, polarized, though I found most comments leaning toward “kids these days” and “China is going to pwn us.” Here are a couple of postmortem followups I saw on Twitter, which include commentary from Dr. Jones:
Again, apologies in advance if these lead to paywalls, but I think the issue here can be broken down into three things:
- The weight of student evaluations on making contract renewal decisions
- The change in standards for learning and grading, i.e. “student-focused” versus “setting a high bar for students to reach” versus the “consumer model”
- Administrators interfering with departmental autonomy in curriculum matters
I’d like to talk a little more about the first two. The third item is a bit of a perplexing issue requiring further study. Dr. Jones was a contract lecturer, not tenured, so I suppose a Dean or a chief academic officer like the Provost can jump into the mix and cry breach of contract, or even just terminate it, since contingent faculty have limited protections. Also, depending on the wording of Jones’s original contract, administration can pull all kinds of tricks that supersede the rightful authority of the NYU Chemistry department to deal with this issue internally. I had some contract nonsense pulled on me, which ultimately fast-tracked my decision to leave. (I’ll spill the Bustelo on that another time.) It makes you keenly aware of the faculty/administrator divide.
Additionally, the NYU Dean’s office offered a retroactive withdrawal to injured students. That’s unheard of, or at least highly unusual. Curriculum design, delivery, and grading are the purview of the faculty. The Academic Freedom Alliance issued a statement on this very delicate issue:
Anyway, on to the two other issues that seem to be plaguing higher ed and the newest generations of customers… I mean students.
Student Evaluations
As a former professor who received evaluations from students every semester, I rail on them every chance I get. Thinking about evaluations still triggers me. Why? They are mostly useless, and I cared way too much about what was in them. You either get a rave review or you get lambasted, often by students in the same cohort. The juxtaposition of responses made it really difficult to extract anything useful.
If you are not familiar with modern student evaluations, they often contain questions like:
- How prepared was the instructor for class?
- How enthusiastic was the instructor about the subject, in-class or online?
- How clearly did the instructor present ideas and theories, in-class or online?
- How concerned was the instructor for the quality of his/her teaching, in-class or online?
- Did the instructor inspire confidence in his or her knowledge of the subject, in-class or online?
- How genuinely concerned was the instructor with students’ progress?
Results are scored on a Likert scale, and your score is then compared to other faculty in your college. And as I alluded to earlier, expect the answers to be almost completely opposite depending on the student. You see, since these evaluations are not required, only encouraged, you almost always get responses from students who either “loved” you and the course or “hated” you and the course. And if the course is required for the major or fulfills a university requirement, you really are at the mercy of each student’s other interests and commitments that semester.

On top of that, you are asking students to rate you using completely subjective markers. How enthusiastic was the instructor? Really? Students, unless they are majoring in education, have no training or experience in pedagogy or learning practices. And while it is true that some faculty care more about teaching than others (and students can easily pick that up), the evaluation is hardly an effective means to assess the capabilities of an instructor. For an enterprise like academia, which relies on peer review and external professional evaluation for accreditation, one would think resources would be in place to do the same for instructor evaluation.
Universities also give students the opportunity to leave written comments. They can be signed or anonymous. They can often be funny, embarrassingly praising, or downright mean. This is not a very scientific way to gather feedback, especially since you can end up with a response rate of a few percent. The whole thing is like Rate My Professor, except these comments actually have been used by administration to determine tenure, promotion, and the size of salary increases.
Teaching + Cookies = Better Reviews
A research paper that I always like to reference when taking issue with student evaluations is one that was done at a medical school. The title of the study is, “Availability of cookies during an academic course session affects evaluation of teaching.” Here is the abstract:
> Objectives: Results from end-of-course student evaluations of teaching (SETs) are taken seriously by faculties and form part of a decision base for the recruitment of academic staff, the distribution of funds and changes to curricula. However, there is some doubt as to whether these evaluation instruments accurately measure the quality of course content, teaching and knowledge transfer. We investigated whether the provision of chocolate cookies as a content-unrelated intervention influences SET results.
>
> Methods: We performed a randomised controlled trial in the setting of a curricular emergency medicine course. Participants were 118 third-year medical students. Participants were randomly allocated into 20 groups, 10 of which had free access to 500 g of chocolate cookies during an emergency medicine course session (cookie group) and 10 of which did not (control group). All groups were taught by the same teachers. Educational content and course material were the same for both groups. After the course, all students were asked to complete a 38-question evaluation form.
>
> Results: A total of 112 students completed the evaluation form. The cookie group evaluated teachers significantly better than the control group (113.4 ± 4.9 versus 109.2 ± 7.3; p = 0.001, effect size 0.68). Course material was considered better (10.1 ± 2.3 versus 8.4 ± 2.8; p = 0.001, effect size 0.66) and summation scores evaluating the course overall were significantly higher (224.5 ± 12.5 versus 217.2 ± 16.1; p = 0.008, effect size 0.51) in the cookie group.
>
> Conclusions: The provision of chocolate cookies had a significant effect on course evaluation. These findings question the validity of SETs and their use in making widespread decisions within a faculty.
I mean, there you have it: free access to 500 grams of cookies and your chances at good evaluations go up! I’m sure more studies would confirm this. I even ran an unscientific version of this one year pre-Covid: I brought in donut holes, and I had some pretty decent evaluations that year. There has to be a better way to assess teaching that does not place such strong weight on these evaluations.
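Incidentally, the effect sizes reported in the abstract can be reproduced from the quoted means and standard deviations. Here is a quick sketch, assuming the authors used Cohen’s d with an equal-weighted pooled standard deviation (the two arms had roughly equal sizes, so this is a reasonable approximation):

```python
import math

def cohens_d(mean_a, sd_a, mean_b, sd_b):
    """Cohen's d using an equal-weighted pooled SD.

    Appropriate when the two groups are about the same size,
    as in the cookie study (10 groups per arm).
    """
    pooled_sd = math.sqrt((sd_a**2 + sd_b**2) / 2)
    return (mean_a - mean_b) / pooled_sd

# Numbers quoted in the abstract: cookie group vs. control group
print(round(cohens_d(113.4, 4.9, 109.2, 7.3), 2))    # teacher rating  -> 0.68
print(round(cohens_d(10.1, 2.3, 8.4, 2.8), 2))       # course material -> 0.66
print(round(cohens_d(224.5, 12.5, 217.2, 16.1), 2))  # overall score   -> 0.51
```

All three values match the abstract, which at least suggests the arithmetic is internally consistent, whatever one thinks of the instrument being measured.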
Students as Consumers
My good friend and colleague Rob Sanford would always say something like, “Academia is the only industry where the customer complains when you give them too much product for their money.” He recently retired and I followed him out the door. I had always joked that when he left I’d go, too. We always traded our own stories and experiences, him as a full professor and Chair of the department and me as a contingent faculty member struggling to get my contract renewed until I reached “just cause” status (just cause at my institution had protection similar to tenure but admin could make your life miserable much more easily). We commiserated often and even shared our experiences in a podcast now on a likely permanent hiatus, called The Contingent Professor. We even did an episode on this very subject (student as consumer). You can find it embedded here (and on Apple Podcasts):
There is a whole field of study looking at the dynamic of “Students as Customers” or “Students as Consumers.” A Google Scholar search with either of these keywords reveals a plethora of investigations into the administrative angle, the marketing angle, and the instructor or faculty angle, and how this dynamic shapes the learning and behavior of both learners and instructors. A lot of this boils down to a continued narrative that, to enter the workforce post-1990, you need a college degree. It is the ticket to a good job. Because manufacturing jobs, like the one my father had, continue to be moved out of the country, the likely replacement jobs come from the legal, medical, and education professions and other service industries.
So the logic is: “if I’m paying for my education as a means to enter the workforce, then I have certain expectations as the customer that should be met by the education service industry.” I’ve actually witnessed students in the hall telling each other, “I’m paying for the professor’s salary, so he/she works for me.” At a public university, that’s not quite true; I’d argue that my state taxes and everybody else’s taxes go toward infrastructure and services that are a common good, so actually, students are working for the common good. But that’s a whole other debate. At private institutions, my argument holds less water, and this notion of entitlement is, I think, not without some merit. The parents and the student are often paying some or all of the money into the university or college, and they have some say over future money as alumni. College tuition is not cheap, so these are massive contributions to the institution.
At issue is: do we go to college to become well-rounded, maturing adults ready to contribute to society in all of the ways that a citizen can, or are we going to college to get training and/or credentials to get a job? That’s the crux of the market-based system that has been established in the higher education marketing machine.
On the academic rigor side of the issue, if we believe we have a choice in what school we attend (assuming we are accepted) and what we study, then we have to accept the contract we make with that institution. If one wants to be a doctor, one has to follow the curriculum laid out by the profession. If one wants to be a chemist, same deal. And the rigor of that curriculum can’t be sacrificed to appease the needs of the students… unless we truly are embracing a customer model. In that case, we need to standardize the way we do education.
And I tend to agree with the notion that weed-out courses aren’t designed to be weed-out courses. They’re just hard. Differential equations is hard; organic chemistry is hard. Can an instructor make your learning experience even more challenging? Absolutely. But ultimately, if you are taking classes that demand far more time, effort, and brain power than other courses, then an A, or even a B, is likely to be tough. Just as world-class athletes need to put in the time to be good enough to be world class, one would expect world-class physicians to meet a rigorous threshold, too. And the beauty of the USA is that you can still do something pretty fantastic even if you don’t get into medical school. The possibilities are unlimited, even with the obvious discrepancies inherent in the education system due to inequalities of all kinds.
Well, I’ve written way too much as it is. If you are interested in reading more on the “Student as Customer” dynamic, hit the search button and prepare to read a lot. Oh, and also check out this nugget I found on the Rise of Review Culture, an industry report by brightpearl and trustpilot. It explains a lot about the culture of students and parents that academia is interacting with now.