Interpreting End of Semester Course Evaluations

Introduction

Asking students to evaluate their courses is standard practice in most universities and colleges. Even though there has been some controversy regarding the practice, and questions have been raised about the validity of the evaluations as well as about their susceptibility to bias, they have become a regular source of feedback for instructors.

These evaluations may be most useful when they are considered in a more nuanced way than as simply positive or negative. By reviewing the data with an open mind and an understanding of their complexity, instructors can use student evaluations as an aid to their growth and development as teachers. Although not all feedback may be constructive, and it is easy to take feedback personally, it is worth carefully analyzing the information and reflecting on what it reveals. Reviewing the results systematically and applying them judiciously may lead to improvements in both course planning and teaching skills.

Alliant Process

At the end of each semester/term/trimester, Alliant students are asked to fill out an online course evaluation form for each course in which they are enrolled. The intent of these evaluations is to provide instructors with information that will help them revise a course and improve their teaching, if needed. A copy of the current survey can be found at the Alliant Course Evaluation Process SharePoint site.  That site also provides a comprehensive description of the course evaluation process as well as an FAQ on the topic. In addition, the site includes a link to the Watermark Course Evaluations & Surveys (formerly EvaluationKIT) dashboard for faculty. 

The survey questions were revised and implemented in the March/Spring 2024 terms. The survey comprises 23 questions that use a 5-point rating scale, ranging from Strongly Disagree (1) to Strongly Agree (5). The questions are divided into the following three categories: course objectives and outcomes, course and course content, and the instructor. In addition, two final open-ended questions ask for additional overall comments and/or recommendations about the course and instructor. 

The survey is available to students 10-14 days prior to the end of the semester/term/trimester, and the Office of Educational Effectiveness recommends that instructors do the following: (a) before the survey period, review their upcoming courses and surveys; (b) during the survey period, monitor the response rate; and (c) after the survey period, review the summaries for each of their courses. 

Approximately two weeks after the close of the survey, and after grades have been reported, feedback results are provided to all instructors for every course they taught in the previous semester, term, or trimester. Feedback includes the following information: (a) the overall response rate; (b) a summary graph of response means for the instructor, school, and university on eight key questions (“At a Glance”); (c) for each question, a frequency distribution of responses and a bar graph comparing the instructor’s mean score with the mean scores for the school and the university; (d) additional information on the question mean, standard deviation, and median for the instructor, school, and university, as well as the response rate; (e) Global Index Summaries (means of the ratings for the Course Objectives, Course Materials, and Instructor questions); and (f) narrative comments. 

Alliant Course Evaluation Process SharePoint site 

Alliant Course Evaluation Survey 

Course Evaluation System FAQ Faculty 

Watermark Course Evaluations Dashboard 

Interpreting Student Feedback

As in most schools, the Alliant evaluation includes both quantitative and qualitative sections.

Interpreting Quantitative Results:

  1. The feedback report provides information on means, medians, and distributions for each question. The most helpful approach to understanding the results is to begin by examining the distribution: look at how the majority of students responded and at the percentage of students who experienced the attribute as a strength (a brief sketch of this kind of summary follows this list). If most students agreed with a statement, that implies that, in general, students experienced that aspect of the course positively. If they disagreed, that suggests a problematic area. Because most faculty nationwide receive ratings toward the upper end of the scale, many scores at the midpoint of the scale may also signal that students perceived significant challenges with the course or instructor.
  2. Determine whether the class breaks into groups. Are there clusters of high scores and low ones, with little middle ground, that reflect two different experiences? Why might that be? Did these groups have different TAs? Meet at different times? Have different reasons for taking the course?
  3. Means can be compared to those of the school and university as well as to those of other courses and the same course over time. CSPP courses make up the majority of courses in the semester system, and CSOE courses make up the majority of courses in the term system, so the school data approximate Alliant data for those two schools. (It is important also to examine medians because distributions may be skewed, especially if the number of respondents is low.)
  4. Look for patterns in scores that are highest and lowest across courses. Are strengths and weaknesses consistent across courses, or are they unique to specific courses, depending on level (introductory vs. advanced), size (lecture vs. seminar), type (required vs. elective) or modality (on ground vs. online)? Some of these course characteristics have been found to be related to students’ evaluations, but it may also be that some courses are a better fit with an instructor’s teaching style.
  5. Look for trends over time. If there are changes, why might they have occurred (e.g., changes in course structure, activities, or assessments; changes in material covered; different student populations; different time of day taught; different modality)? The university also provides data on faculty means by question since 2005, which can be used for comparisons. Course Evaluation Means by Academic Year
  6. Do not focus on composite scores. Collapsing all items into one score assumes each item is of equal importance. Similarly, global items (which Alliant does not use) are generally too broad to be interpreted meaningfully. It is noteworthy, however, that scores on global items are strongly related to ratings on questions that ask for feedback on clarity and organization.
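
To make the distribution-first approach in item 1 concrete, here is a minimal sketch in Python of one way to summarize a single question's responses: the frequency distribution, the share of favorable ratings (4s and 5s), and the mean and median. The function name and sample data are illustrative only and are not part of the Watermark reports.

    from collections import Counter
    from statistics import mean, median

    def summarize_question(ratings):
        """Summarize one 5-point item (1 = Strongly Disagree ... 5 = Strongly Agree)."""
        n = len(ratings)
        counts = Counter(ratings)
        distribution = {score: counts.get(score, 0) / n for score in range(1, 6)}
        favorable = sum(counts.get(s, 0) for s in (4, 5)) / n  # share who agreed or strongly agreed
        return {
            "n": n,
            "distribution": distribution,
            "percent_favorable": round(100 * favorable, 1),
            "mean": round(mean(ratings), 2),
            "median": median(ratings),
        }

    # Hypothetical class of 12 respondents for one question
    print(summarize_question([5, 5, 4, 4, 4, 3, 3, 5, 2, 4, 5, 4]))

Reading the distribution and median alongside the mean helps guard against a few extreme ratings dominating the summary.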

Interpreting Qualitative Results:

  1. Comments can be helpful by providing context for ratings as well as specific suggestions for improvement; however, they can be difficult to interpret if they are vague, general, or contradictory, or if they reflect isolated student experiences. Therefore, in interpreting comments it is helpful to group them into themes and count the number of responses associated with each theme (a brief sketch of such a tally follows this list). The Iowa State University Center for Excellence in Learning and Teaching provides a worksheet for analyzing student comments along with a set of detailed instructions for its use. Comments Analysis Worksheet
  2. Instead of focusing on either positive or negative feedback, it is useful to consider the range and specific content of responses students provide. Look for specific ideas, rather than just focusing on whether they are favorable or not. Issues can be divided into those that are actionable and those that are not.
  3. Look for patterns here, too. Are there issues that have been raised (either positively or negatively) by multiple students? Do several people agree on a strength or weakness of the course? Do several people bring up the same thing, such as a particular assignment or class activity, either liking or disliking it? In this context, critical comments can be helpful if they point out aspects of the course that did not go well but can be changed in the future.
  4. Determine whether the comments are consistent within the class, indicating agreement among students. Do they cluster into several subgroups? If so, what might account for these group differences?
  5. Again, compare comments/themes across courses and time. Are there consistencies? Trends related to characteristics of the course?
  6. Although qualitative comments often provide valuable information that clarifies the quantitative ratings as well as specific actionable suggestions for improvement, they are also problematic in that it is easier to focus on negative issues or isolated student experiences. Negative and sensational comments tend to carry more weight, even if they represent only a minority opinion, and so it is important to put and keep them in perspective. Moreover, it is important to focus on strengths as well as weaknesses. It may be useful to ask students, prior to their completing the evaluations, to “please provide comments that will be helpful to me in improving the course. Give me your ideas of what might work better.”
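
As a companion to the theme-grouping step in item 1, here is a minimal Python sketch of tallying comments once each has been assigned a theme. The comments and theme labels are hypothetical; in practice the coding itself is the judgment-laden step, and the Iowa State worksheet offers a more structured approach to it.

    from collections import Counter

    # Hypothetical comments, each hand-coded with a theme while reading
    coded_comments = [
        ("Feedback on papers came back slowly", "timeliness of feedback"),
        ("Loved the case-study discussions", "class activities"),
        ("Grading criteria were unclear", "assessment clarity"),
        ("More case studies, please", "class activities"),
        ("Rubrics would have helped", "assessment clarity"),
    ]

    # Count how many comments fall under each theme, most frequent first
    theme_counts = Counter(theme for _, theme in coded_comments)
    for theme, count in theme_counts.most_common():
        print(f"{theme}: {count}")

Even a simple count like this makes it easier to distinguish recurring concerns from isolated remarks.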

It may be useful to share both numeric ratings and narrative comments with a colleague or mentor who can provide advice and help achieve perspective, especially regarding low scores and negative comments. These might be faculty members in the same program or school, a program director, or a dean. The Center for Teaching Excellence can also provide consultation. These colleagues can help not only interpret the findings but also develop both short- and long-term strategies for improving as a teacher. Small changes can have big effects.

Response Rate

Although even a few returned course evaluations may offer valuable insights into student experience of a course, the higher the response rate, the more meaningful the findings. It is important that the sample be large enough to accurately reflect the viewpoints within the class. Recommended response rates vary by the size of the course enrollment. According to research cited by the Brown University Sheridan Center for Teaching and Learning, minimum recommended response rates depend on class size: 5-20 students, 80%; 21-30 students, 75%; 31-50 students, 66%; 51-100 students, 60%; and over 100 students, 50%.
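
For a quick check against these thresholds, the following Python sketch simply encodes the Brown recommendations quoted above; the function names and example figures are illustrative.

    def min_recommended_rate(enrollment):
        """Minimum recommended response rate by class size (Brown Sheridan Center thresholds)."""
        if enrollment <= 20:
            return 0.80
        if enrollment <= 30:
            return 0.75
        if enrollment <= 50:
            return 0.66
        if enrollment <= 100:
            return 0.60
        return 0.50

    def rate_is_adequate(enrollment, respondents):
        """Return the observed response rate and whether it meets the recommended minimum."""
        rate = respondents / enrollment
        return rate, rate >= min_recommended_rate(enrollment)

    # Example: 18 of 25 enrolled students responded (72%, below the 75% threshold)
    rate, ok = rate_is_adequate(25, 18)
    print(f"Response rate {rate:.0%}; meets recommended minimum: {ok}")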

Given the importance of a robust response rate, it is crucial for instructors to encourage students to respond to the survey.  One approach is to emphasize that the results are taken seriously and have led to specific changes in both the content and organization of a course and the instructor’s teaching approach (providing some examples, if possible).

Contextual Factors

As suggested above, in interpreting student feedback, it is important to take account of the instructional context. According to the University of Washington Center for Teaching and Learning site:

  1. Student ratings and comments are affected by their reasons for enrolling in a course (e.g., required vs. elective) and their expected grade.
  2. Ratings can be biased by student perceptions of instructor identity, including the instructor’s race, ethnicity, gender, disability, and age, as well as whether English is an additional language for the instructor.
  3. Piloting teaching innovations may initially produce more negative student comments and a decrease in ratings due to student resistance to changes in roles and expectations.
  4. Course evaluations completed online tend to have a lower response rate and may produce bimodal results, with high scores, low scores, and few in the middle.
  5. Response rates and response variability matter. Data from low response rates should not be considered representative of all students. In small classes, even with high response rates, averages of small samples can be more susceptible to “the luck of the draw” than averages of larger samples, producing more extreme evaluations than in larger classes. Students in small classes might also imagine their anonymity to be more tenuous, perhaps reducing their willingness to respond or to do so truthfully.
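
The “luck of the draw” in item 5 can be illustrated with a small simulation. Assuming, hypothetically, that every respondent rates according to the same probabilities, class means from small samples still scatter far more widely than those from larger ones:

    import random
    from statistics import mean, stdev

    random.seed(1)  # fixed seed so the illustration is reproducible

    scores = [1, 2, 3, 4, 5]
    weights = [0.05, 0.05, 0.15, 0.35, 0.40]  # hypothetical, skewed high like typical ratings

    def simulated_class_means(n_respondents, trials=10_000):
        """Draw many simulated classes and return the mean rating of each."""
        return [mean(random.choices(scores, weights, k=n_respondents)) for _ in range(trials)]

    for n in (5, 50):
        means = simulated_class_means(n)
        print(f"n={n:>2}: SD of class means = {stdev(means):.2f}, "
              f"range = {min(means):.1f} to {max(means):.1f}")

The spread of simulated class means shrinks roughly with the square root of the number of respondents, which is why small classes can swing between extreme evaluations even when nothing about the teaching has changed.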

It is also important that student evaluations not be the only source of evaluative input on teaching. They provide only one type of information: students’ perceptions. Feedback should also come from peer observation and review of course materials as well as from self-reflection and familiarity with the scholarly literature on pedagogy. Furthermore, all instructors should develop their own philosophy of teaching and determine their teaching goals at each point in their career. Those insights will help determine which ratings and comments are most useful in self-assessment of both strengths and areas for growth.

Quick Tips

Specific changes and practices that can improve evaluations

According to the Vanderbilt University Center for Teaching, the most frequently mentioned areas for teaching improvement in student evaluations are clearer, more specific in-class communication and clearer, more explicit organization of course content. (At Alliant, frequently noted areas for improvement often include providing timely and helpful feedback.)

The Brown University Sheridan Center for Teaching and Learning made the following points about simple changes to courses that can improve student evaluations.

  1. Responses to global evaluation questions are most strongly related to scores on questions that ask for feedback on clarity and organization.
  2. Changes that can improve students’ sense of clarity include:
    1. writing key terms on the board
    2. giving multiple examples of a concept
    3. pointing out practical applications of a concept.
  3. Teaching behaviors that are associated with student perceptions of organization include:
    1. providing an outline or agenda for each class
    2. noting when you are transitioning from one topic to another
    3. noting sequence and connections or highlighting how each course topic fits into the class as a whole
    4. periodically stopping to review (or asking students to review) key course topics

The Cornell University Center for Teaching and Learning cited evidence that four factors significantly contributed to improvement of teaching as measured by student evaluations:

  1. Engaging in active and practical learning that emphasizes the relevance of course material to students
  2. Creating the opportunity for significant teacher/student interactions and conferences that allow instructors to connect with students
  3. Emphasizing learning outcomes and setting high expectations
  4. Making revisions and improvements to how student learning is assessed

Another way to improve teaching evaluations is to ask for and receive early feedback. The Alliant Center for Teaching Excellence site provides information on ways to collect, evaluate, and implement midterm feedback.

Sources

Interpreting Your Course Feedback
Brown University Sheridan Center for Teaching and Learning

Strategies for Better Course Evaluations
Iowa State University Center for Excellence in Learning and Teaching

Making Use of Student Evaluations
Georgetown University Teaching Commons

Student Evaluations of Teaching
Vanderbilt University Center for Teaching

End of Semester Evaluations
Harvard University Bok Center for Teaching and Learning

Using Student Evaluations to Assess Teaching Effectiveness
Indiana University – Purdue University Indianapolis Center for Teaching and Learning

Student Evaluations
Cornell University Center for Teaching Innovation

Some Guidelines for Interpreting and Using Student Rating Forms
University of Denver Office of Teaching and Learning

A Guide to Best Practice for Evaluating Teaching
University of Washington Center for Teaching and Learning

TEP Statement on Student Evaluation of Teaching
University of Oregon Teaching Engagement Program