Blended learning techniques, which integrate in-person and online components within a single course, are becoming increasingly popular in health sciences education. However, the evidence supporting blended learning’s relative efficacy over more conventional course formats is equivocal.
A quasi-experimental, non-equivalent control group design was used to investigate the influence of a blended learning approach on student learning in a graduate-level public health course. Exam scores and course point total data from a “conventional” format semester (n = 28) were compared to data from a blended learning semester (n = 38). In addition, students evaluated the blended learning approach.
Even after controlling for prior academic performance, the blended learning approach produced a statistically significant improvement in student performance (final course point total d = 0.57, a medium effect size). Furthermore, student views of the blended format were quite favorable, with the majority of students (83%) preferring the blended learning approach.
Blended learning techniques have the potential to improve student learning and performance in health sciences courses.
Over the last 15 years, a growing number of health sciences courses, like courses across colleges and universities generally, have incorporated online course components. These range from fully online courses to face-to-face courses with limited online components. Of particular interest are courses that use a blended learning approach, in which some course elements are delivered in a conventional classroom environment while others are offered online. Blended learning combines online and face-to-face course components with the expectation that the pieces will function together as a unified, integrated course [2, 3]. These design decisions are sometimes driven by economic, logistical, or other planning considerations, and at other times by the relative strengths and weaknesses of different modalities for presenting course material [2, 3].
Although the reasons for offering blended learning experiences vary widely across colleges and universities, a critical question from a teaching and learning standpoint is whether such designs deliver course content effectively and, given the shift away from strongly classroom-based formats, whether blended learning approaches differ from traditional classroom delivery in the learning outcomes students achieve. It is also important to examine how students perceive a blended learning course and what feedback they offer on its effectiveness.
We investigate these questions in the context of a master’s-level public health survey course. In a quasi-experimental, non-equivalent control group study, learning outcomes under a blended learning delivery were compared to those under a more traditional, classroom-based format.
While there is a substantial body of research on the effectiveness of fully online course delivery, few studies have examined the blended learning approach. This is especially true for graduate health sciences courses, as much of the research has concentrated on undergraduate education. Rhetorical arguments for blended learning have emphasized that different learning tasks are naturally suited to particular delivery modalities, with modal blending allowing a “match” between learning task and delivery mode. It has also been argued that moving didactic lecture presentation online “frees up” in-person class time, allowing for greater engagement in active learning.
While there are compelling pedagogical arguments in favor of blended learning, the empirical literature on the relative effectiveness of blended versus traditional approaches is mixed. Some studies conclude that a blended learning approach is more effective, while many others find no differences in outcomes between the two modes of delivery [6-9]. Some research suggests that the relative efficacy of delivery modes may depend on the level of the learning outcome, with online and in-person delivery being equivalent for lower-level skills but one mode being preferable for higher-level skills.
Given the mixed findings on the relative merits of blended learning versus traditional formats for student learning, this paper investigates whether and how the shift to blended learning affects student outcomes while holding course content and learning objectives constant. On the one hand, the additional in-person, active learning time freed by moving lecture components online may argue for greater learning in the blended format. On the other hand, moving lecture content from in-person to online delivery may reduce active engagement with certain course components, arguing for greater learning in the conventional format.
The effects of shifting from a more “traditional” classroom model to a blended model on student learning outcomes were investigated using a quasi-experimental, non-equivalent control group design. Three measures were used to assess the consequences of this change in course delivery: 1) exam performance; 2) overall course performance; and 3) student course evaluation ratings and open-ended comments.
The study described here was reviewed by the University at Buffalo Social and Behavioral Sciences Institutional Review Board (protocol 426637–1).
Participants included 66 graduate students enrolled in one of two semesters of a master’s-level course on the social and behavioral sciences in public health (38 in the blended learning semester and 28 in the “traditional” comparison semester). Of the 66, 54 were enrolled in the university’s Master of Public Health program (for which the course is required), 5 were enrolled in a Preventive Medicine residency (for which it is also required), and 7 were enrolled in another university graduate program (for which the course is not required).
Description of the course
The course is a master’s-level survey of the role of the social and behavioral sciences in public health. It is a required core course for all students in the university’s Master of Public Health program, as well as for medical school graduates completing the university’s residency program in preventive medicine. The course also attracts a small number of Ph.D. students from fields such as nursing, social work, communications, and psychology.
The course was taught in a fairly traditional format during the “baseline” semester (Fall 2011). Each week, students completed out-of-class reading assignments (typically 2-4 journal articles or book chapters), but all non-reading course content was presented in class via instructor lecturing. Active learning activities such as small group work and class discussions were also included during class time. Around 60% of class time was lecture-based, with the remaining 40% devoted to active learning.
During the “blended learning” semester (Fall 2012), all didactic content presentations were pre-recorded and posted online for students to view before the week’s in-class sessions. In-class time was then devoted almost entirely (at least 80%) to active learning approaches. In-class lecturing occurred only when necessary to clear up points of student confusion or when integrating a lecture presentation with an active learning activity was required for the activity’s successful implementation.
Notably, the learning objectives and course content remained consistent across the two semesters. Course readings were nearly identical; readings differed between the baseline and blended semesters only where the content of a reading overlapped with material presented in a recorded lecture. The key differences between the baseline and blended semesters were: a) the presentation of didactic lecture components online rather than in class; and b) the freeing of in-class time for more in-depth, active learning engagement with course concepts as a result of the shift to online lecture presentation.
Components of evaluation
The course was divided into three units, each lasting approximately 3½ weeks. Students took a non-cumulative unit exam after each unit. Each unit exam had 10 multiple-choice questions and 4 short-answer questions. Exams were deliberately kept consistent across the two semesters to allow performance comparisons; the only difference was that 1-2 multiple-choice questions on Exams 1 and 2 were changed to correct problematic items.
Total number of course points
In addition to exams, the overall course point total reflected performance on writing assignments, a capstone end-of-semester project, and participation in in-class and out-of-class activities.
Ratings from students and open-ended comments
Students anonymously completed a standardized, school-wide course evaluation at the end of the semester. The evaluation included closed-ended ratings of both the course’s and the instructor’s quality, each on a 5-point scale (1 = unacceptable, 5 = one of the best). Students were then asked two open-ended questions: “Please comment on course elements you found particularly effective” and “Please comment on course improvements you would suggest.” A supplemental evaluation question asked students, “Given the choice, would you prefer to take the course in the blended format we used this semester or in a more ‘traditional’ lecture-in-class format?”
Four students were removed from the dataset before analysis. Two students (one from each semester) dropped the course shortly after the first exam. In addition, two students took the course in the baseline semester and, after failing to earn a B or higher (required for graduate credit), re-took it in the blended learning semester; because it was their second time taking the course, these students were removed from the blended learning semester data. One of these two students had failed the baseline course because of academic dishonesty on a course project. That student’s exam scores were retained in the dataset, but the final course point total was not (given that the course total reflected the grading consequences of academic dishonesty rather than course performance).
Because undergraduate GPA was not available for all students, we examined whether students with GPA data differed from those without. There were no differences in any of the course grading components based on GPA availability. Furthermore, no GPA availability × semester interactions were found on any outcome variable; all F-tests were non-significant.
The comparison of exam performance and course point totals across the two semesters is the key test of the relative efficacy of the blended learning approach. Given the non-equivalent control group design, differences in student characteristics across the two semesters are a primary threat to the validity of this test. As a result, before conducting the analyses, we examined students’ prior academic performance (as measured by undergraduate GPA), program of study, and gender. There were no differences between the semesters (see Participants above). Although the lack of differences increases confidence in the approach, we conducted all analyses controlling for undergraduate GPA to further address the possibility of non-comparability. Exam analyses used repeated measures ANCOVA, with the three exam scores as a repeated measures outcome variable, semester as a categorical predictor, and undergraduate GPA as a continuous covariate. The final course point total was modeled with univariable ANCOVA, with the final course point total as the continuous outcome variable and the same predictor and covariate.
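As one illustration, the univariable ANCOVA on the final course point total could be sketched as follows. This is a minimal sketch assuming statsmodels as the analysis library; the data are simulated with hypothetical group sizes and effect values, not the study’s data.

```python
# Sketch of a univariable ANCOVA: course total ~ semester, controlling for GPA.
# All values below are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_baseline, n_blended = 28, 38  # group sizes from the study design

df = pd.DataFrame({
    "semester": ["baseline"] * n_baseline + ["blended"] * n_blended,
    "gpa": rng.normal(3.3, 0.3, n_baseline + n_blended),  # hypothetical GPAs
})
# Simulate course totals with a modest (hypothetical) semester effect
df["course_total"] = (
    400
    + 30 * (df["semester"] == "blended")
    + 40 * (df["gpa"] - 3.3)
    + rng.normal(0, 25, len(df))
)

# ANCOVA via OLS: categorical predictor (semester) + continuous covariate (GPA)
model = smf.ols("course_total ~ C(semester) + gpa", data=df).fit()
print(anova_lm(model, typ=2))  # Type II ANOVA table with F-tests
```

The repeated measures exam analysis would follow the same logic, with the three exam scores as a within-subject factor.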
Importantly, all outcomes analyses were conducted in three ways: 1) without controlling for undergraduate GPA; 2) controlling for undergraduate GPA and including only those students with GPA data; and 3) using mean score substitution (by semester) to estimate GPA for students without it and then controlling for undergraduate GPA for all students. Across all three methods, the pattern of mean differences (i.e., which semester had a higher or lower score) was the same. Given this, all means and standard deviations reported here are unadjusted for GPA (so that descriptive data reflect the entire dataset); reported significance tests reflect differences controlling for GPA among students for whom an undergraduate GPA was available.
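The mean score substitution in approach 3 can be sketched with pandas: each missing GPA is replaced by the mean GPA of that student’s semester. The GPA values below are hypothetical.

```python
# Mean-score substitution by semester for missing GPA (hypothetical values).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "semester": ["baseline", "baseline", "blended", "blended", "blended"],
    "gpa": [3.2, np.nan, 3.8, 3.4, np.nan],
})

# Within each semester, fill missing GPAs with that semester's mean GPA
df["gpa_imputed"] = df.groupby("semester")["gpa"].transform(
    lambda s: s.fillna(s.mean())
)
print(df)
```

With these values, the missing baseline GPA becomes 3.2 (the baseline mean) and the missing blended GPA becomes 3.6 (the blended mean).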
Finally, to examine student feedback on the blended learning approach, the authors content-analyzed open-ended course evaluation feedback from the blended learning semester. After an initial read-through, responses were coded into a series of feedback categories (see Table 1) that were refined during the coding process. Coding was not exclusive (i.e., a given comment could be assigned to more than one category). In addition, scores on the two closed-ended evaluation items were compared across semesters using linear regression, with evaluation score as a continuous outcome measure and semester as a categorical predictor. Because student evaluations are anonymous and separate from other course components, covariates could not be included in this analysis.
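The cross-semester comparison of evaluation ratings can be sketched as a linear regression with a binary semester predictor (for two groups this is equivalent to a two-sample t-test). The ratings below are hypothetical 1–5 values, not the study’s evaluation data, and statsmodels is assumed.

```python
# Comparing evaluation ratings across semesters via linear regression.
# Ratings are hypothetical; with a single binary predictor, the slope
# equals the difference in group means.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.DataFrame({
    "semester": ["baseline"] * 4 + ["blended"] * 4,
    "course_rating": [4, 3, 4, 4, 5, 4, 5, 4],  # hypothetical 5-point ratings
})

fit = smf.ols("course_rating ~ C(semester)", data=ratings).fit()
print(fit.params)   # intercept = baseline mean; dummy = mean difference
print(fit.pvalues)  # significance test for the semester difference
```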