Tuesday, January 5, 2010

Cool Things @ UOW: Report: Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies

Evaluation of Evidence-Based Practices in Online Learning:
A Meta-Analysis and Review of Online Learning Studies


2009

U.S. Department of Education
Office of Planning, Evaluation, and Policy Development
Policy and Program Studies Service

URL:
http://www.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

Abstract
 
A systematic search of the research literature from 1996 through July 2008 identified more than
a thousand empirical studies of online learning. Analysts screened these studies to find those that
(a) contrasted an online to a face-to-face condition, (b) measured student learning outcomes, (c)
used a rigorous research design, and (d) provided adequate information to calculate an effect
size. As a result of this screening, 51 independent effects were identified that could be subjected
to meta-analysis. The meta-analysis found that, on average, students in online learning
conditions performed better than those receiving face-to-face instruction. The difference
between student outcomes for online and face-to-face classes—measured as the difference
between treatment and control means, divided by the pooled standard deviation—was larger in
those studies contrasting conditions that blended elements of online and face-to-face instruction
with conditions taught entirely face-to-face. Analysts noted that these blended conditions often
included additional learning time and instructional elements not received by students in control
conditions. This finding suggests that the positive effects associated with blended learning
should not be attributed to the media, per se. An unexpected finding was the small number of
rigorous published studies contrasting online and face-to-face learning conditions for K–12
students. In light of this small corpus, caution is required in generalizing to the K–12 population
because the results are derived for the most part from studies in other settings (e.g., medical
training, higher education).
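The effect size used throughout the report is a standardized mean difference (Cohen's d): the treatment mean minus the control mean, divided by the pooled standard deviation. A minimal sketch of that calculation, using purely hypothetical score data (not figures from the report):

```python
import math

def pooled_sd(sd_t, n_t, sd_c, n_c):
    """Pooled standard deviation of the treatment and control groups."""
    return math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))

def effect_size(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference: (treatment mean - control mean) / pooled SD."""
    return (mean_t - mean_c) / pooled_sd(sd_t, n_t, sd_c, n_c)

# Hypothetical example: online group averages 78 (SD 10, n = 40),
# face-to-face group averages 75 (SD 10, n = 40).
d = effect_size(78, 10, 40, 75, 10, 40)
print(round(d, 2))  # prints 0.3 -- "small" by Cohen's (1992) benchmarks
```

A positive d favors the treatment (online) condition, so the report's overall +0.24 falls between Cohen's "small" (.20) and "medium" (.50) benchmarks.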


Key Findings

The main finding from the literature review was that

• Few rigorous research studies of the effectiveness of online learning for K–12 students
have been published. A systematic search of the research literature from 1994 through
2006 found no experimental or controlled quasi-experimental studies comparing the
learning effects of online versus face-to-face instruction for K–12 students that provided
sufficient data to compute an effect size. A subsequent search that expanded the time
frame through July 2008 identified just five published studies meeting meta-analysis
criteria. 

The meta-analysis of 51 study effects, 44 of which were drawn from research with older learners,
found that: [2]

• Students who took all or part of their class online performed better, on average, than
those taking the same course through traditional face-to-face instruction. Learning
outcomes for students who engaged in online learning exceeded those of students
receiving face-to-face instruction, with an average effect size of +0.24 favoring online
conditions. [3] The mean difference between online and face-to-face conditions across the
51 contrasts is statistically significant at the p < .01 level. [4] Interpretations of this result,
however, should take into consideration the fact that online and face-to-face conditions
generally differed on multiple dimensions, including the amount of time that learners
spent on task. The advantages observed for online learning conditions therefore may be
the product of aspects of those treatment conditions other than the instructional delivery
medium per se.
 
• Instruction combining online and face-to-face elements had a larger advantage relative
to purely face-to-face instruction than did purely online instruction. The mean effect size
in studies comparing blended with face-to-face instruction was +0.35, p < .001. This
effect size is larger than that for studies comparing purely online and purely face-to-face
conditions, which had an average effect size of +0.14, p < .05. An important issue to keep
in mind in reviewing these findings is that many studies did not attempt to equate (a) all
the curriculum materials, (b) aspects of pedagogy and (c) learning time in the treatment
and control conditions. Indeed, some authors asserted that it would be impossible to have
done so. Hence, the observed advantage for online learning in general, and blended
learning conditions in particular, is not necessarily rooted in the media used per se and
may reflect differences in content, pedagogy and learning time. 

• Studies in which learners in the online condition spent more time on task than students in
the face-to-face condition found a greater benefit for online learning. [5] The mean effect
size for studies with more time spent by online learners was +0.46 compared with +0.19
for studies in which the learners in the face-to-face condition spent as much time or more
on task (Q = 3.88, p < .05). [6]

• Most of the variations in the way in which different studies implemented online learning
did not affect student learning outcomes significantly. Analysts examined 13 online
learning practices as potential sources of variation in the effectiveness of online learning
compared with face-to-face instruction. Of those variables, (a) the use of a blended rather
than a purely online approach and (b) the expansion of time on task for online learners
were the only statistically significant influences on effectiveness. The other 11 online
learning practice variables that were analyzed did not affect student learning
significantly. However, the relatively small number of studies contrasting learning
outcomes for online and face-to-face instruction that included information about any
specific aspect of implementation impeded efforts to identify online instructional
practices that affect learning outcomes.

• The effectiveness of online learning approaches appears quite broad across different
content and learner types. Online learning appeared to be an effective option for both
undergraduates (mean effect of +0.35, p < .001) and for graduate students and
professionals (+0.17, p < .05) in a wide range of academic and professional studies.
Though positive, the mean effect size is not significant for the seven contrasts involving
K–12 students, but the number of K–12 studies is too small to warrant much confidence
in the mean effect estimate for this learner group. Three of the K–12 studies had
significant effects favoring a blended learning condition, one had a significant negative
effect favoring face-to-face instruction, and three contrasts did not attain statistical
significance. The test for learner type as a moderator variable was nonsignificant. No
significant differences in effectiveness were found that related to the subject of
instruction.

• Effect sizes were larger for studies in which the online and face-to-face conditions varied
in terms of curriculum materials and aspects of instructional approach in addition to the
medium of instruction. Analysts examined the characteristics of the studies in the meta-
analysis to ascertain whether features of the studies' methodologies could account for
obtained effects. Six methodological variables were tested as potential moderators: (a)
sample size, (b) type of knowledge tested, (c) strength of study design, (d) unit of
assignment to condition, (e) instructor equivalence across conditions, and (f) equivalence
of curriculum and instructional approach across conditions. Only equivalence of
curriculum and instruction emerged as a significant moderator variable (Q = 5.40, p <
.05). Studies in which analysts judged the curriculum and instruction to be identical or
almost identical in online and face-to-face conditions had smaller effects than those
studies where the two conditions varied in terms of multiple aspects of instruction (+0.20
compared with +0.42, respectively). Instruction could differ in terms of the way activities
were organized (for example as group work in one condition and independent work in
another) or in the inclusion of instructional resources (such as a simulation or instructor
lectures) in one condition but not the other.
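Several of the moderator comparisons above are reported with a Q statistic (Q = 3.88 for time on task; Q = 5.40 for curriculum equivalence). Under a fixed-effect model, Q-between measures how far the inverse-variance-weighted subgroup means diverge from the grand weighted mean. A rough sketch of that computation (the grouping and numbers in the example are hypothetical, not the report's data):

```python
def q_between(groups):
    """Fixed-effect Q-between for subgroups of effect sizes.

    groups: list of subgroups, each a list of (effect_size, variance) pairs.
    Under homogeneity, Q-between follows a chi-square distribution with
    (number of subgroups - 1) degrees of freedom.
    """
    def weighted_mean(pairs):
        # Inverse-variance weights give more precise studies more influence.
        weights = [1.0 / var for _, var in pairs]
        mean = sum(w * d for (d, _), w in zip(pairs, weights)) / sum(weights)
        return mean, sum(weights)

    subgroup_stats = [weighted_mean(g) for g in groups]
    total_w = sum(w for _, w in subgroup_stats)
    grand_mean = sum(m * w for m, w in subgroup_stats) / total_w
    return sum(w * (m - grand_mean) ** 2 for m, w in subgroup_stats)

# Hypothetical subgroups: identical subgroup means yield Q = 0,
# while diverging means drive Q upward.
print(q_between([[(0.5, 0.1)], [(0.1, 0.1)]]))  # roughly 0.8
```

A large Q relative to its chi-square distribution (as with Q = 5.40, p < .05) indicates the subgroup mean effects genuinely differ, which is how the analysts flagged blending and extra time on task as significant moderators.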


[2] The meta-analysis was also run with just the 44 studies with older learners. Results were very similar to those for
the meta-analysis including all 51 contrasts. Variations in findings when K–12 studies are removed are described
in footnotes.
[3] The + sign indicates that the outcome for the treatment condition was larger than that for the control condition. A
– sign before an effect estimate would indicate that students in the control condition had stronger outcomes than
those in the treatment condition. Cohen (1992) suggests that effect sizes of .20 can be considered "small," those of
approximately .50 "medium," and those of .80 or greater "large."
[4] The p-value represents the likelihood that an effect of this size or larger will be found by chance if the two
populations under comparison do not differ. A p-value of less than .05 indicates that there is less than 1 chance in
20 that a difference of the observed size would be found for samples drawn from populations that do not differ.

[5] This contrast falls just short of statistical significance (p < .06) when the five K–12 contrasts are removed from the
analysis.
[6] The Q-between statistic tests whether the mean effect sizes for the two sets of studies under comparison differ
statistically.
--
For more cool links and ideas visit....
http://coolthingsuow.blogspot.com/
...if you want to be added or removed from this email list please contact wmeyers08@gmail.com
