It starts with Greenberg's question, followed by Ehrmann's comments, then Hake's report on his research, further comments from Ehrmann, and Steve Gilbert's editorial notes at the end.
In the March/April 1997 issue of Educom Review, Larry Irving, administrator of the National Telecommunications and Information Administration (NTIA), is quoted as saying, "Study after study is beginning to demonstrate that students who use technology learn better and learn differently from kids who don't."
In the same issue of Educom Review, Thomas Russell, Director of Instructional Telecommunications at North Carolina State University, writes, "Technology is not neutral, despite the fact that study after study has concluded that using it in the classroom neither improves nor diminishes instruction for the masses."
My question is:
Which is it, and where are all these studies cited?
I have been in Instructional Technology for 20 years now and have heard a lot of talk like this, but I have not seen ANY serious research that addresses this question. Can anyone point me to it so I can become educated?
Thanks.
James B. Greenberg, Assistant Director, Computing Services
Fitzelle 204, SUNY Oneonta, Oneonta, New York 13820
Phone: (607) 436-2701   Internet: <GREENBJB@ONEONTA.EDU>
Ignorance is curable, stupidity is forever.
Consider an analogous question: does using paper improve learning? I think the two questions, about "technology" and about paper, are comparable. If they are, that helps explain both quotes in Greenberg's question, since paper is both extremely helpful in learning and yet doesn't "make" anyone learn anything by itself. (Is something written on the paper? If the writing is meant to teach, how good is that content? If the student is doing the writing, under what circumstances is it being done? Or is the piece of paper being used by an art student for a sketch? And so on.)
For a more elaborate review of the educational research bearing on those questions, you might take a look at my article, "Asking the Right Questions: What Research Tells Us about Technology and Higher Learning," which originally appeared in Change Magazine and is now available in the TLTR Workbook. It's also available on the Web at Learner Online:
http://www.learner.org/content/ed/strat/eval/ACPBRightQuestion.html
Steve Ehrmann
P.S. I try to collect educational research on technology and higher learning that _does_ have something useful to say for faculty and students. If you see something really good, please pass it along and I'll try to share at least some such pieces here in AAHESGIT.
Stephen C. Ehrmann, Ph.D.
Director, The Flashlight Project
c/o The American Association for Higher Education
One Dupont Circle, Washington, DC 20036-1110
Phone: 202-293-6440   Fax: 202-293-0073
SEhrmann@aahe.org
http://www.aahe.org/TLTR.htm
http://www.learner.org/content/ed/strat/
A survey (1, 2) of pre/post-test data for 62 introductory physics courses enrolling a total of N = 6542 students in high schools (14 courses, N = 1113), colleges (16 courses, N = 597), and universities (32 courses, N = 4832) throughout the country strongly suggests that "interactive-engagement methods" can increase mechanics-course effectiveness in promoting both conceptual understanding and problem-solving ability well beyond that achieved by "traditional" methods.
The data were sent to me in response to requests for test results using the well-known conceptual "Force Concept Inventory" (FCI) (3) and problem-solving "Mechanics Baseline" (MB) (4) exams of Hestenes et al. This mode of data solicitation tends to pre-select results that are biased in favor of outstanding courses showing relatively high gains on the FCI. When relatively low gains are achieved (as they often are), they are sometimes mentioned informally, but they are usually neither published nor communicated except by those who (a) wish to use the results from a "traditional" course at their institution as a baseline for their own data, or (b) possess unusual scientific objectivity and detachment. Fortunately, several in the latter category contributed data to the present survey for courses in which interactive-engagement methods were used but relatively low gains were achieved. Some suggestions for increasing course effectiveness have been gleaned from those cases (1, 2).

As in any scientific investigation, bias in the detector can be put to good advantage if appropriate research objectives are established. We did not attempt to assess the average effectiveness of introductory mechanics courses. Instead we sought to answer a question of considerable practical interest to physics teachers: Can the classroom use of IE methods increase the effectiveness of introductory mechanics courses well beyond that attained by traditional methods?
"Interactive Engagement" (IE) methods are defined as those designed at least in part to promote conceptual understanding through interactive engagement of students in heads-on (always) and hands-on (usually) activities which yield immediate feedback through discussion with peers and/or instructors, all as judged by their literature descriptions. "Traditional" (T) courses are defined as those reported by instructors to make little or no use of IE methods, relying primarily on passive-student lectures, recipe labs, and algorithmic-problem exams. The average normalized gain <g> for a course is defined as the ratio of the actual average gain <G> to the maximum possible average gain, i.e.,
<g> = %<G> / %<G>max = (%<Sf> - %<Si>) / (100 - %<Si>), where %<Sf> and %<Si> are the final (post) and initial (pre) class averages.
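For concreteness, here is a small illustrative sketch of the computation; the pre/post percentages below are invented for the example and are not data from the survey.

```python
# Illustrative only: the class averages below are invented, not survey data.

def normalized_gain(pre_pct, post_pct):
    """Average normalized gain <g> = (%<Sf> - %<Si>) / (100 - %<Si>)."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# A class averaging 45% on the pre-test and 75% on the post-test
# realizes about 55% of its maximum possible gain:
print(round(normalized_gain(45.0, 75.0), 2))  # prints 0.55
```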
The interactive-engagement courses were, on average, more than twice as effective as traditional courses in promoting conceptual understanding, since <<g>>IE = 2.1 <<g>>T. Here the double carets "<<X>>NP" indicate an average of averages, i.e., an average of <X> over N courses of type P, and sd denotes the standard deviation of <g> over those courses (not to be confused with random or systematic experimental error). The difference <<g>>48IE - <<g>>14T = 0.25 is 1.8 standard deviations of <<g>>48IE and 6.2 standard deviations of <<g>>14T, a separation reminiscent of that seen in comparing instruction delivered to students in large groups with one-on-one instruction (5). As discussed in ref. 1, it is extremely unlikely that systematic error played a significant role in the large difference observed in the average normalized gains of the T and IE courses.
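In the same illustrative spirit, the sketch below shows how the average of averages and a difference measured in standard deviations are formed; the per-course gains are hypothetical, not the survey's data.

```python
import statistics

# Hypothetical per-course values of <g>; not the survey's actual data.
ie_gains = [0.55, 0.42, 0.51, 0.46]   # interactive-engagement courses
t_gains  = [0.20, 0.25, 0.22]         # traditional courses

avg_ie = statistics.mean(ie_gains)    # <<g>>_IE: each course weighted equally
avg_t  = statistics.mean(t_gains)     # <<g>>_T
diff = avg_ie - avg_t

# Express the difference in units of each group's course-to-course spread,
# as in the comparison above.
print(diff / statistics.stdev(ie_gains))
print(diff / statistics.stdev(t_gains))
```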
Among the survey courses (total enrollment N = 6542), the most widely used interactive-engagement methods, ranked by the number of students using them, were: Collaborative Peer Instruction, 4458 (all IE-course students); Microcomputer-Based Laboratories, 2704; Concept Tests, 2479; Socratic Dialogue Inducing Labs, 1705; Overview Case Study and Active Learning Problem Sets, 1101; Modeling, 885; and research-based text or no text, 660.
As for uses of technology, Collaborative Peer Instruction in large "lecture" sections has been carried out effectively with systems such as "Classtalk." Classtalk (6) provides hand-held computers for students, a master computer for the instructor, and a classroom network that allows immediate feedback on Concept Tests or on a lecturer's questions. In Microcomputer-Based Laboratories (10), students use an ultrasonic motion detector to observe the motion of any object, including their own bodies, and to display, in real time, graphs of position, velocity, and acceleration. There is some use of computers in Socratic Dialogue Inducing Labs (11), in Overview Case Study and Active Learning Problem Sets (12), and in Modeling (13).
The present survey suggests that interactive engagement, rather than technology per se, may be the crucial factor in improving the effectiveness of introductory mechanics courses, but that technology can be very important when it supports such an approach. It would appear that the same may hold for other undergraduate science, mathematics, engineering, and technology (SME&T) courses (14-16). The situation for non-SME&T courses may be similar.
REFERENCES
A commercial wireless classroom communication system is described by R.A. Burnstein and L.M. Lederman, "Report on progress in using a wireless keypad response system," "Proceedings of the 1996 International Conference on Undergraduate Physics Education" (College Park, MD, in press). Pedagogical advantages and utilization of Classroom Communication Systems (CCS) are discussed by R.J. Dufresne, W.J. Gerace, W.J. Leonard, J.P. Mestre, and L. Wenk, "Classtalk: A classroom communication system for active learning," J. Computing in Higher Ed. 7(2), 3-47 (1996). CCS may allow a cost-effective Socratic approach (see, e.g., ref. 8) to instruction in large-enrollment "lecture" sections.
In fact we do lots of educational research that is implicitly about the educational value of paper. And we see a similar variety of findings -- improves, hurts, no difference -- for the same reasons: different uses, different contexts. Anyone feel compelled to cut paper budgets while we settle this?
Similarly, depending on how modern technology is used, research shows different things: better, worse, different, no difference. For example, Tom Russell (quoted by Greenberg) has compiled a huge list of the 'no significant difference' studies in distance learning:
http://tenb.mta.ca/phenom/phenom.html
But there are also large numbers of positive studies on technology and learning, and even some negative findings.
What's really important is to look hard for relevant research or evaluation on educational practices that parallel your own. If a certain technique works elsewhere, it _could_ work for you, too. If a certain type of failure happens elsewhere, it _might_ happen for you, too. Other people's research and evaluation can expand your imagination about what to try, why, and what errors to watch out for. I think the most important function of research and evaluation is to furnish what a mathematician might call "existence proofs" and an engineer might casually label as a "performance envelope": an increased sensitivity to the boundaries of the possible.
Corollary: there's no substitute for studying your own educational practices, technology-based or otherwise, because no one else's successes or failures will tell you for sure what's happening with your own students.
For example, there's a very nice little study by Jerald Schutte about a big jump in student achievement in a course at Cal State Northridge that used the Web:
http://www.csun.edu/sociology/virexp.htm
Recently someone said to me, "Schutte's study shows that asynchronous learning works!" As Nero Wolfe, the fictional detective, used to say, "Pfui!" Schutte showed something very important, but it wasn't that. He showed that his particular approach to instruction _can_ work. Big difference. To discover whether something similar does work for you, you need to look for yourself at your own practices. That's the purpose of our Flashlight evaluation tool kit project: to help you create such studies of your institution's educational uses of technology. For an introduction, see "The Flashlight Project: Spotting an Elephant in the Dark."
We'll start to offer site licenses for Flashlight this summer. Watch AAHESGIT.
(6/10/97 AAHESGIT #146. Approx. 100 lines from Steve Ehrmann of AAHE <SEhrmann@aahe.org> and Jim Greenberg of SUNY Oneonta <greenbjb@snyoneva.cc.oneonta.edu>.

I asked Steve Ehrmann to provide some additional introductory comments on the posting from Greenberg. Both messages below raise important questions about, and help clarify, the availability of research on the educational impact of various uses of information technology. I believe that most leaders of educational institutions are now convinced of the need for educational uses of information technology to compete for students (and faculty!) and to support career/job/work preparation. However, many board members, presidents, et al. still face major resource-allocation decisions about the broader educational role of technology. These leaders could make major investment decisions more enthusiastically and comfortably if better information were available confirming the educational value of increasing the academic use of technology. Anecdotal reports from faculty and students about their strong preferences and personal beliefs in the educational effectiveness of various applications of information technology are rapidly accumulating, but good research data would still help wherever it could be applied. Please let Ehrmann and Greenberg know if you have any!)

(6/24/97 AAHESGIT #152. Approx. 5 pages from Richard Hake of Indiana U. <hake@ix.netcom.com> and Steve Ehrmann of AAHE <sehrmann@aahe.org>.

This posting begins with Hake's extremely thoughtful and well-documented discussion of the evaluation of "interactive engagement" methods for teaching introductory physics/mechanics. That is followed by a copy of an earlier posting by Jim Greenberg of SUNY questioning what research has to say about the educational effectiveness of information technology. The posting ends with an additional response to Greenberg by Ehrmann.

Hake: "The interactive engagement courses were, on average, more than twice as effective as traditional courses in promoting conceptual understanding..."

Ehrmann: "What's really important is to look hard for relevant research or evaluation on educational practices that parallel your own.")

Steve Gilbert

===============================================

This letter from Prof. Richard Hake is the best response I've gotten so far to AAHESGIT posting #146. It's followed by the original post from James Greenberg and some further notes from me about it.

Steve Ehrmann
Director, The Flashlight Project
AAHE

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Information below last updated: 4/27/97

TLTR Summer Institute -- July 11-16, 1997, Phoenix, Arizona

Steven W. Gilbert, Director, Technology Projects
American Association for Higher Education
202/293-6440 x54   FAX: 202/293-0073
GILBERT@CLARK.NET
http://www.aahe.org [includes TLTR Web Site]

Schedule for 1997 TLTR workshops available from Amanda Antico, 202/293-6440 x38, ANTICO@CLARK.NET.
Order the TLTR Workbook at the special AAHESGIT reader rate: call 202/293-6440 x11 and give code "SGIT 4/97".

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

NOTE: Anyone can subscribe to the AAHESGIT Listserver by sending the e-mail message (with the subject line left blank):
SUBSCRIBE AAHESGIT yourfirstname yourlastname
to LISTPROC@LIST.CREN.NET

If you would like to post a message to the AAHESGIT Listserv, send it to AAHESGIT@LIST.CREN.NET.

With over 6,000 subscribers, not all messages sent to AAHESGIT can be posted. Those that are selected for posting are reviewed and may be edited for clarity. Authors are often asked to expand or clarify their messages before distribution to the list. Facts, including URLs, are not checked or confirmed by me. Opinions expressed in AAHESGIT's postings do not necessarily reflect those of anyone officially affiliated with AAHE.

I intend that each posting be protected by copyright as a compilation. As the copyright holder for the posting, I give permission ONLY for duplication and transmission of each compilation complete and intact, including this paragraph. Duplication and/or transmission of any portion should be guided by "fair use" principles, and explicit permission should be obtained when needed. Permission to duplicate or transmit any portion written by a contributor must be obtained from that person.

- Copyright 1997 Steven W. Gilbert