FAS statement on Department of Education Report
Author: Henry Kelly
The Department of Education report on classroom software, Effectiveness of Reading and Mathematics Software Products: Findings from the First Student Cohort, sounded a wake-up call for anyone pondering technology in education. Its authors conclude that math and reading software produce no better test results than conventional teaching methods.
How can a technology that is transforming the way we acquire information throughout the economy, revolutionizing businesses from manufacturing to banking, fail to benefit education? How can technology that is revolutionizing training in the Department of Defense fizzle in elementary schools?
The report's indictment of existing products underscores the nation's woeful underinvestment in new technology that fosters new learning experiences. Despite its huge public investment in school hardware, the United States has expected private investors to undertake the expensive, demanding research, development, and testing needed to use that hardware effectively.
Ten years ago, the inside joke among economists was that information technology was showing up everywhere in the economy except in the productivity statistics. No one is laughing now, because productivity gains are heavily dependent on IT advances. It took years to figure out how best to use the technology, and doing so required a major reinvention of business practices. Given the difficulty of marketing innovations to school bureaucracies already overwhelmed by increasing demands and fixed resources, education hasn't been able to mount a comparable effort. It's painful to see the federal government invest $10 million in reviewing commercial software when it has made such a small investment in helping developers design and test software for schools.
Where the report helps:
Investment in research and testing by educational software producers runs significantly behind that lavished on other commercial software. Satisfied customers like the Pentagon acknowledge that it's expensive to develop excellent products, which succeed only after extensive testing and refinement. They usually work only when the basic strategies of instruction change to reflect the unique capabilities of the software, such as letting students proceed at their own pace. Few commercial developers want to gamble the time and money on R&D, given the complexities and high risks of developing and marketing educational software.
Where the report can be misread:
The report is not evidence that instructional technology cannot be a powerful learning tool. It indicates that results on standardized tests were not significantly improved by the systems found in a sample set of schools. Yet fully 86-92 percent of the teachers in the program found the systems sufficiently useful to keep using them. Clearly they perceive a value that does not register on the tests.
The study focuses on whether the technology was better than traditional teaching methods but fails to consider it as a productivity tool. The study concedes that the software produced results no worse than traditional teaching, and that teachers used it to help individual students. The technology thus offers the promise of teaching more students with the same number of teachers, without degrading educational quality. The study backhandedly indicates that this technology has productivity potential.
The study’s experimental methods suffer from some unavoidable but fundamental problems.
- Most of the systems studied are older and do not employ state-of-the-art software.
- Vital features of technology-based instruction could not be tested. For example, good software allows each student to proceed at his or her own pace, something impossible in standard classrooms. The best also integrates continuous testing, showing students whether they are advancing toward a goal. A final test should bring no surprises. Most of the software was not designed to produce a result on a specific test. Naturally, an instructor focusing narrowly on specific tests would produce better results.
- The study notes that its results are "based on schools and teachers who were not using the products in the previous school year": teachers using the systems for the first time. Fifty percent of the teachers later indicated that, once they began to use the software, they recognized the need for more support and training. They weren't fully fluent in the material.
- Many teachers used the material for "supplementary" or "enrichment" purposes: enjoyable perhaps for their students, but not necessarily rewarding on standardized test scores. In many of the trials, an average of one computer for every three students complicated instructors' tasks. Software sessions averaged only 10-15 percent of class time.
- To protect individual vendors, the report jumbles various product results, likely diluting the impact of excellent products.
- The tests involved large numbers of students and teachers but, since they covered four grade levels with several software packages apiece, their sample sizes (250-500 students) were skimpy for any given study. Superior packages, like the Carnegie Cognitive Tutor Algebra program, involved similar-sized cohorts but reached happier conclusions.
- Well-designed educational software has been shown to motivate learners to increase time-on-task. But, as in the earlier studies of IT's role in business productivity, the positive effects of individual products are lost in the averages.
Instructional software may be best at teaching skills poorly measured by standardized exams. The Carnegie Cognitive Tutor Algebra, for example, showed a small impact on SAT scores but more than tripled performance on "problem solving" tests. Testing such sophisticated skills is another untapped potential of instructional software.
The bottom line
The gap between the potential of educational technology and the products now in the market is huge. It's well documented that schools are a tough market for entrepreneurs, who readily underestimate the costs and travails of developing products that really work.
The new report underscores the need for public funding of research and testing to determine what works. We need to teach an increasingly diverse population an increasingly sophisticated set of skills without blowing the budget. Technology provides a critical resource for meeting this challenge. There's no doubt that kids expect to learn from technology: they revel in it outside of school. There is a compelling need for more federal research in designing effective instructional software, and in testing innovations to see what works and what doesn't.
Through its Learning Federation initiative, the Federation of American Scientists (FAS) is exploring the uses of advanced video-gaming technologies to create vivid educational experiences for learners from first graders to fire chiefs. Surely the nation can use for its schools the serious technology already effectively training its doctors, its pilots, and its emergency responders.