Interpreting IASystem™ Course Summary Reports
(for reports from Winter 2016 and earlier)

Your Course Summary Report has some or all of the following sections:

A. Header: Includes information to identify the course, instructor and semester.  The overall response rate for the survey is also displayed.  The information shown regarding "Course type" and instructor rank is collected by IASystem for quality control purposes.  That information, whether accurate or inaccurate, should have no bearing on GVSU interpretations of LIFT results.

B. "Summative" Information:  This section includes items that IASystem developers intend for use in summative evaluations of courses and instructors.  GVSU allows discretion for academic units regarding which LIFT response information is used for summative evaluation, so the "summative" and "formative" labels in the course reports may not be accurate for your academic unit.

  • Overall Summative Rating - This is a combined median of the four universal items that are included in every LIFT questionnaire.  This is the measure that IASystem recommends for use in circumstances where a single score is desired for summative evaluation purposes.  Please see GVSU's principles for using LIFT results for summative evaluation for guidelines about such practices.
  • Challenge and Engagement Index (CEI): Several IASystem items ask students how academically challenging they found the course to be.  IASystem calculates the average of these items and reports it as a single index.  In general, the Challenge and Engagement Index correlates modestly with the "Overall Summative Rating".  An analysis of how the scales relate among GVSU classes has not yet been performed.
  • "Summative Items" - Again, the "summative" label is applied by IASystem, not GVSU.  These are the four universal items on LIFT questionnaires (although some minor variations in wording exist among the forms).  Combined, they are the basis for the "Overall Summative Rating" score.  For each item, the report shows a frequency distribution and a refined median score.

C. "Student Engagement" Information:  Most LIFT questionnaires include several questions about students' motivations, expectations, and effort related to the course.  This data contributes to the Challenge and Engagement Index, and over time will contribute to research on how these aspects of engagement relate to students' ratings of instruction.  For each item, the report shows a frequency distribution and a refined median score.  For sections that are part of multi-section courses (e.g., a student takes a lecture and a lab, with separate LIFT surveys for each), it remains to be seen how students interpret these questions and what value, if any, the responses have.

D. "Standard Formative Items": Again, "formative" is a label supplied by IASystem, and it does not preclude use of these items in summative evaluations at the discretion of an academic unit.  The items in this section vary by survey form.  Each item appears in some questionnaires but not in others.  The responses to these items should provide more detail and better diagnostic information for formative purposes than the broader universal LIFT items.  For each item, the report shows a frequency distribution and a refined median score.

E. Instructor-Added Items (optional; additional questions can be added prior to administration of the survey):  Frequency Distribution and median for any ordinal-type questions.

F. Standard Open-Ended Questions (optional; can be excluded from reports at time of report generation):  Verbatim text of students' responses to the four universal LIFT free-response items.

G. Instructor-Added Open-Ended Questions (optional; additional questions can be added prior to administration of the survey and can be excluded from reports at time of report generation): Verbatim text of students' responses to additional free-response items.


Notes on Frequency Distributions and Medians:

For each ordinal item, the report displays:

  • The number of responses
  • The percent of respondents who gave each response level (This percentage is based on the number who responded to the item, not the number who submitted a questionnaire.)
  • The median response - The median is the point on the rating scale at which half of the responses were higher and half were lower. 

Medians are interpolated using the method described in the IASystem Interpreting Reports guide.  To interpret median ratings, compare the value to the response scale.  For example, if the response scale is {Very Poor (0), Poor (1), Fair (2), Good (3), Very Good (4), Excellent (5)} and the median is listed as 4.3, then the score corresponds to a value of "Very Good" (i.e. 4.3 rounds to 4, which corresponds to "Very Good").  The decimal portion of the median relates to whether there were more responses below "Very Good" or above.  If the listed median is above its rounded value (as 4.3 is), then it indicates that more responses were above the median ordinal value than below it -- in this case, "Excellent" outnumbered {"Good" and below}.  Conversely, a listed value that is below its rounded value would indicate that there were more responses below the median ordinal value than above it.
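The interpolation described above can be sketched with the standard grouped-data method, which treats each rating category as a unit-wide interval with boundaries at the category value ± 0.5.  (The exact formula IASystem uses appears in its Interpreting Reports guide; the function below is an illustrative assumption, not IASystem's code.)

```python
# Sketch of a grouped-data interpolated ("refined") median for ordinal
# ratings, assuming unit-wide categories with boundaries at value +/- 0.5.
# This mirrors the standard textbook method, not necessarily IASystem's
# exact implementation.

def interpolated_median(counts):
    """counts: dict mapping scale value (e.g. 0..5) to response count."""
    total = sum(counts.values())
    if total == 0:
        raise ValueError("no responses")
    half = total / 2.0
    cumulative = 0
    for value in sorted(counts):
        f = counts[value]
        if f > 0 and cumulative + f >= half:
            lower = value - 0.5  # lower boundary of the median category
            # Position the median within the category in proportion to
            # how many of the "first half" of responses it must absorb.
            return lower + (half - cumulative) / f
        cumulative += f

# Hypothetical example: 20 responses on the 0-5 scale described above.
ratings = {2: 1, 3: 4, 4: 9, 5: 6}
print(round(interpolated_median(ratings), 1))  # → 4.1
```

In this example the median of 4.1 rounds to 4 ("Very Good"), and the decimal portion above 4 reflects that more responses fell above "Very Good" (six "Excellent") than below it (five "Fair" and "Good" combined), matching the interpretation given above.  Python's standard library offers the same grouped-data interpolation as statistics.median_grouped.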


Sample Size, Response Rate, and Student Confidentiality

The number of respondents to any particular LIFT survey matters for three separate reasons:

  • If the number of students in the course is small, then the confidentiality of individual student respondents may be compromised, and respondents might be less forthright in their feedback.  This is the primary reason LIFT surveys are optional for course sections with fewer than 10 students.
  • If the number of responses is small, then the statistical properties of the results are less desirable.  Specifically, the precision and reliability of the results are weaker when the number of responses is low.  Reports based on fewer than 20 responses should be interpreted with particular care. 
  • If the percentage of invited students who fill out LIFT questionnaires is low, concern increases about the effect of response bias on the findings.  All voluntary surveys with less than 100% participation are subject to response bias, since the factors that influence students' decisions to respond may also be related to the responses they would provide.  As the rate approaches 100% the potential impact of that bias shrinks.  GVSU has set a goal of at least 70% participation for all LIFT Surveys.

If the student response to your LIFT survey falls short in some or all of these areas, it doesn't mean that you can't learn from the results, but you should take particular care in interpreting them.  All LIFT results should be viewed in conjunction with other evidence (other LIFT results and other types of observations of teaching effectiveness).



Page last modified May 9, 2016