Published November 3, 2021
A confession: despite the title, I don’t really love course evaluations. If two years of answering emails for the campus-wide course evaluations system has taught me anything, it is that I am not alone. They can be a source of enormous stress, a confusing labyrinth of ever-changing user interfaces and new tools that muddy the quest for clear answers.
I have been on committees (fortunately at other institutions) where they were used for purposes to which they were approximately as suitable as a driving test is for diagnosing COVID, and I am fully sympathetic to the horror stories I have heard at assessment conferences about how misapplied they can be generally and how hurtful they can be personally. Particularly when I started teaching, at another institution a decade ago, they were presented as a simple and immutable truth, a conversion, in a Barthesian sense, of the qualitative into the quantitative, dropped in our laps as the sine qua non of understanding our efficacy as instructors.
That such an understanding is rubbish does nothing to relieve the real stress it can induce. However, a more granular understanding of why that framing fails opens the door to thinking about the systematic collection of student feedback in more generative ways: in particular, to understanding how these evaluations can be used in general and how to make them your own in ways that improve your course design and classroom instruction.
That distinction in terminology is crucial. As Dr. Justin Hoshaw of Waubonsee Community College put it recently during a presentation for the IUPUI 2021 Assessment Institute, “course evaluations” implies a holistic assessment of the course, literally an evaluation. What we are talking about, instead, is a formalized tool for gathering and comparing student feedback. That framing allows for a slew of frankly neat things: tracking trends, evaluating specific elements of a course, and even running validity assessments on the data itself, both to uncover new possibilities in teaching and to check the results for unconscious bias (more on that in our next post!).
What it should not do, but sometimes seems to, is lend an extra veneer of immutable authority to the results. There are many things that student feedback cannot fully capture, from the actual expertise of the professors in their fields to the variety of institutional and programmatic necessities that often shape courses. Anecdotally, I can say some of the most frustrated and negative comments I received from students, especially early on in my career, were from those who did not seem to understand that a master’s student on a T.A. line did not have the power to change the printer password, let alone override the Board of Regents on the curriculum.
This ties closely to the fundamental misapprehension I, at least, was under until I started working for CATT’s Educational Effectiveness and Learning Analytics team (then the Office of Educational Effectiveness): that the student feedback in course evaluations was gathered primarily due to an edict from on high, its policies governed by administrative fiat.
In fact, at UB at least, campus-wide course evaluations exist because of a Faculty Senate resolution and are governed by a committee made up of representatives chosen by each school. These members of the UBCE Advisory Committee advocate for their schools’ needs and set policies in the best interests of their instructors. Our office oversees their day-to-day administration, acting as something between an IT help line and a living F.A.Q. page, and sometimes we can provide useful insights, analyses, and tips for what to do with the results. But the policies and common questions were written by faculty to help faculty.
So, with all of that in mind, the question is: what do you do with them? How can you, too, learn to love (or at least not dread) course evaluations, and metaphorically ride them out of the chute like Slim Pickens at the end of Dr. Strangelove (with, we trust, a happier implied ending)?
The first thing is admittedly the most difficult: a mindset change from viewing them as an exterior, closed entity to viewing them as one more source of student feedback, hopefully among a great deal of it. These evaluations are designed by faculty for faculty use and overseen by faculty-led representation; they are here to help you. They are admittedly a higher-stakes form of feedback, since there is no getting around the reality that they are used for formal purposes, from serving as a metric of a program’s health to appearing in promotion and tenure dossiers. But viewed as faculty-solicited student feedback, they become an extremely valuable tool for improving your course and instruction, which is, after all, a goal of nearly everyone I know who has stood in front of a classroom or sat behind a monitor with the incredible opportunity and responsibility of helping students master learning objectives.
To that end, it is okay, even desirable, to talk with your students about the kind of feedback you are looking for. If you are wondering whether an assignment sequence made sense to them for the learning outcome it was designed to help them master, or whether the technologies you are using are the best way of facilitating learning, the evaluations are a perfect place to home in on those questions. For that reason, we allow departments and even individual instructors to add questions (only a few; survey fatigue is real!). We even link to the PICES inventory: carefully formulated and researched questions designed to get at different facets of a course as effectively as possible.
The other thing, and something I repeat ad nauseam, is simply a truism of data: the larger the sample size, the more reliable the results. Encourage, even push, your students to complete them. Sacrificing fifteen minutes of class time would not make sense to me if it yielded the same meager number of responses, but when it can push your overall response rate from five out of fifty students to thirty-five, it makes the results that much more representative (a rough sketch of why appears after the next paragraph). A higher response rate may also help alleviate the common worry that only the least and most satisfied students will take them. There are many other ways to encourage your students to provide feedback; two stand out as painless to implement while delivering real returns. First, talk to your students about how you have used past feedback to improve your course. It is hardly surprising that students who feel their feedback matters are more likely to take the responsibility of providing it seriously. Also unsurprisingly, the more useful feedback you get from students and incorporate, the easier this becomes to discuss.
The second is tautological: encourage your students to complete the evaluations by encouraging them to take the evaluations. It makes a big difference when students get an email directly from their instructors (which can be done easily through SmartEvals, with the option of only contacting those students who have not yet completed them). I get enough emails from students in response to the automatically generated requests to know that a faceless office carries far less weight than does an instructor they have come to know and respect. A few follow-up emails and in-class reminders from you can make all the difference.
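To put rough numbers on the sample-size point above, here is a minimal illustrative sketch, not a tool our office uses, applying the standard margin-of-error formula for an estimated proportion with a finite population correction; the class size of fifty and the response counts of five and thirty-five are the hypothetical figures from the example above.

    import math

    def margin_of_error(responses, class_size, p=0.5, z=1.96):
        """Approximate 95% margin of error for an estimated proportion,
        with a finite population correction for a small class."""
        standard_error = math.sqrt(p * (1 - p) / responses)
        fpc = math.sqrt((class_size - responses) / (class_size - 1))
        return z * standard_error * fpc

    # Hypothetical class of 50 students, as in the example above.
    for n in (5, 35):
        print(f"{n} responses: roughly +/-{margin_of_error(n, 50):.0%}")
    # 5 responses: roughly +/-42%
    # 35 responses: roughly +/-9%

Even as a back-of-the-envelope figure, the difference between an estimate good to roughly plus or minus forty-two points and one good to roughly plus or minus nine points is why a few minutes of class time and a reminder email can be well worth the trade.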
There are a number of additional tips and tricks to getting the most out of course evaluations as an instructor, including a series of videos on the new report wizards that make it easier than ever to view data trends and turn your feedback into actionable insights. In about two weeks, I will be back with our team director, Dr. Cathleen Morreale, to discuss updates in the world of student feedback and projects we will be rolling out over the next few semesters to make student feedback work even better for you.
Office of Curriculum, Assessment and Teaching Transformation
716-645-7700
ubcatt@buffalo.edu