[Air-L] F2F, Hybrid, Online Efficacy Research

Kevin Guidry krguidry at gmail.com
Thu Aug 20 21:40:25 PDT 2009


   As a mere PhD student (our "credentials" discussion is fresh in my
mind), I feel compelled to prominently proclaim that I am not speaking
on behalf of my employer or colleagues but offering my own viewpoint
and opinions.

On Thu, Aug 20, 2009 at 7:30 PM, Charlie Balch <charlie at balch.org> wrote:
> I'd appreciate some references regarding the comparative
> efficacy of F2F, Online, and hybrid instruction.

   We've looked at this a few times in my shop, Indiana University's
Center for Postsecondary Research.  We annually administer several
large-scale surveys, most prominently the National Survey of Student
Engagement (NSSE), and occasionally we've focused on technology as it
relates to our measures of engagement and other proxy measures of
learning.  We've been conducting the NSSE for 10 years now, and I'm
starting to compile all of our technology-related work.  It seems that
every time we look at this issue there is a significant positive
correlation between technology use and nearly everything we measure.
   Our most recent work, which I presented a few months ago at the
meeting of the American Educational Research Association, focused
specifically on how the relative number of online courses relates to
the things we measure.  Even when we controlled for a number of factors
(age, gender, enrollment status, major, institution type, etc.), the
same generally positive correlations remained: increased use of
technology positively correlated with measures of engagement and
learning.  Our paper can be found at
http://cpr.iub.edu/uploads/Engaging%20Online%20Learners.pdf if anyone
is interested in digging into this more.
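   For anyone unfamiliar with that kind of analysis: "controlling for"
those factors amounts, roughly, to a regression with covariates.  Here's
a minimal sketch in Python using NumPy's least-squares solver.  All of
the variable names, effect sizes, and data below are invented for
illustration; none of this comes from the actual NSSE dataset or our
paper.

```python
import numpy as np

# Hypothetical sketch of a covariate-controlled regression.  The data
# are simulated; the "true" online-course effect is set to 0.8.
rng = np.random.default_rng(0)
n = 2000

age = rng.normal(22, 4, n)                 # invented covariate
full_time = rng.integers(0, 2, n)          # invented enrollment status (0/1)
online_courses = rng.poisson(1.5, n)       # invented count of online courses

# Simulated engagement score with a small positive "online" effect.
engagement = (
    50
    + 0.8 * online_courses
    + 0.3 * age
    + 2.0 * full_time
    + rng.normal(0, 5, n)
)

# Ordinary least squares with controls: intercept, online, age, status.
# The coefficient on online_courses is the association *after*
# adjusting for the other columns.
X = np.column_stack([np.ones(n), online_courses, age, full_time])
coef, *_ = np.linalg.lstsq(X, engagement, rcond=None)
print(f"estimated online-course coefficient: {coef[1]:.2f}")
```

The point of the sketch is just the mechanics: the positive coefficient
survives the controls because the effect was built into the simulated
data, which is exactly the inference a real analysis has to earn.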
   But "technology is good!" isn't a very useful or nuanced finding,
right?  We're continuing to dig into this; we have another large
set of data from this year's survey that we're working to analyze.  We
asked more specific and different questions this time around, so we
should learn some new things.  But we're also pretty limited by our
methods and resources.  We can make some really good generalizations
backed by impressive numbers of respondents, but we can't ever answer
the "why" and "how" questions.

> I appreciate that many factors are involved and instructional efficacy is
> not a well defined construct.

   The things in which we're all really interested are very subtle and
hard to define, much less measure.  Many, many things are conflated and
confused.  And it's difficult, and often irresponsible, to generalize
findings.


Kevin
