Making Sense of Research: What's Good, What's Not, and How to Tell the Difference


Elaine K. McEwan & Patrick J. McEwan

    Dedication

    To the memory of Elaine's husband and Patrick's father, Richard T. McEwan (1929–1990).


    Foreword

    Educational decision makers are constantly confronted with challenges for improving educational outcomes. They are bombarded with stories in journals and newsletters that herald new educational breakthroughs in curriculum, instructional strategies, teacher qualifications, and technology that have been “shown” to raise student test scores. They are assailed with the recommendations of consultants and experts who begin their sentences with the authority of three words: “Research shows that….” But, they also know that these words are used very loosely in education, often to promote approaches to which the advocate is committed. Only rarely is the researcher required to provide documentation that substantiates the research findings. Even worse, what is often called research in education would not pass muster as rigorous evidence in any of the sciences or social sciences. Educational experimentation is difficult to undertake, and educational researchers often lack the specialized training that is required for systematic inquiry.

    What is an educational decision maker to do under such circumstances? Can research claims be trusted? How can one understand the differences between solid research findings and mere claims that “Research shows that…”? Recently, the stakes have risen. The No Child Left Behind Act of 2001 mandates that decisions using federal funding must be made on the basis of “scientifically based research.” Presumably, state and local decision makers using federal funds must limit the choice of educational interventions for improving instruction to those that are scientifically validated. Yet, most policy makers and decision makers are not likely to be able to distinguish among interventions according to this criterion. Advocates of educational interventions have historically placed findings of questionable validity into scientific-appearing formats such as graphs and histograms, assertions of “significant” results, and journal citations. All of these are designed to confer the manifestations of “scientificism” on claims of educational effectiveness. Often these devices are used for marketing purposes. Unfortunately, the vast majority of research claims in education have been found to be suspect in terms of the validity of the evidence. Only a few are solidly supported by systematic research, but which few?

    Making Sense of Research by Elaine and Patrick McEwan is the most effective attempt that I have seen to assist decision makers in sifting through scientific claims about educational interventions. The book takes readers through a comprehensive set of principles regarding the evaluation of research claims and applies these to a set of case studies of prominent reforms. The collaboration of a noted educator and a highly regarded economist provides insight into and understanding of both the strengths and weaknesses of education research. These insights are applied directly to prominent educational reforms and the analysis of the quality of research underlying them. Educators owe a debt of gratitude to the McEwans for their clarity in presentation and penetrating guidance. Without question, this book will serve as the standard work in assisting educators and decision makers to assess the validity of research claims in education as they determine how to improve student outcomes.

    Henry M. Levin
    William Heard Kilpatrick Professor of Economics and Education, Teachers College, Columbia University, and David Jacks Professor of Higher Education and Economics, Emeritus, Stanford University

    Preface

    The dictionary defines research as “careful, patient, systematic, diligent inquiry or examination in some field of knowledge, undertaken to establish facts or principles” (McKechnie, 1983, pp. 1538–1539). When Webster's definition of research is paired with the word education, the results seem almost oxymoronic. Educators often make decisions at the last minute, under extreme pressure, and with more regard for what is popular or innovative than for what is established, factual, or truthful. For many educators, any consideration of the rigors of research ended when they sold their statistics books to the used bookstore—grateful that they would never again have to “crunch numbers” or decipher data. Unfortunately, they regarded research as an obstacle to surmount on the way to an advanced degree, not an essential aspect of informing their day-to-day practice. Research was for the ivory tower types—not for “down-in-the-trenches” practitioners.

    We can no longer afford this shortsighted approach. One word has changed the educational landscape—accountability. Now that educators are increasingly accountable for the outcomes of their efforts, the decisions they make regarding instruction and curriculum have taken on more gravity. Good intentions and hard work no longer count. Politicians, parents, and the press want results, and research can help us get them. Make no mistake; research cannot provide recipes or prescriptions. Schooling is far too idiosyncratic to respond to simplistic formulas. However, when carefully read and thoughtfully considered, quality research can inform, enlighten, and provide direction to practitioners that will save us time and money and, more importantly, enhance the effectiveness of our schools and increase the opportunities for our students.

    We, the authors of Making Sense of Research, are related to one another, as you may have already surmised. We are a mother-son team—an unusual writing combination to be sure. But we think you will find that our unique blend of training and experiences provides a helpful perspective as you attempt to make sense of education research. If you have not already done so, take a moment to skim our biographies in About the Authors. We bring the best of two worlds to the writing of this book—the “real” world where education is practiced daily and the “research” world where the “disciplined search for knowledge” (Smith & Glass, 1987, p. 6) is ongoing. Ideally, these two worlds would be well-connected with the key findings regarding educational policies and practices making their way straight from the pages of refereed journals to the in-baskets of education's principal stakeholders and decision makers. However, as one writer pointed out, “The [education] research-to-practice pipeline has sprung many leaks” (Miller, 1999, p. A17). We aim to put our collective fingers in those leaks.

    In our experience, the relationship between education researchers and practitioners is a tenuous and occasionally nonexistent one. Researchers are often guilty of shutting down practitioners with esoteric arguments, while practitioners are no less guilty of putting up their own smoke screens to defend their distrust of data. “When was the last time you were in a classroom? You ought to get out into the real world,” they assert. The authors have certainly been guilty of engaging in these arguments from time to time—even with each other. In a way, our authorship is a metaphor for the larger questions: How can education research improve practice? How can educators make sense of the complexities and even incongruities of research? Conversely, how can researchers tap into the rich knowledge base of educators regarding the implementation of research in authentic settings?

    The Goals of This Book

    Our mission is set forth in the book's title: to equip our readers with the conceptual understandings they need to make sense of education research. This is not merely an academic exercise, however. Our ultimate goal is that you will use research findings to inform decision making and practice in your classroom, school, or district. Making sense of research is not just about reading and understanding research done by others. In our opinion, making sense of research is also about doing your own site-specific, user-driven research as a way of sustaining school improvement, keeping the vision alive, and attaining your mission.

    If your eyes glaze over whenever you read a research study because you lack the tools to comprehend what it means, then you are being held hostage to someone else's interpretation of the findings. If you read only the introduction and the conclusion to a research study, ignoring everything in between, you may as well not read it at all. By the time you finish reading Making Sense of Research, you will be able to judge the quality of research for yourself and have confidence in your judgment.

    Who This Book Is For

    We have written this book for educators and educational stakeholders who want to be informed and active participants in discussions regarding curriculum, instruction, and policy. This book is for those who make decisions—from the seemingly smallest teacher-made decision regarding time allocation during reading instruction to major statewide policy decisions such as reducing class size. The following individuals will find Making Sense of Research to be helpful:

    • Concerned and conscientious teachers who recognize that their efforts are not bringing about the desired results and want to be more effective in their classrooms
    • Principals who feel increasing pressures to bring students to mastery of national, state, or local achievement standards and are frustrated by the often haphazard way in which program decisions are made
    • Teams of teachers and administrators who are charting school improvement initiatives and need the tools to make quality decisions
    • District administrators who are faced with large-scale budgetary and curricular decisions and need direction in the allocation of resources
    • University professors who want their students to become well-informed and knowledgeable consumers of education research
    • Educational consumers and policymakers such as parents, school board members, or legislators who want to base their decisions on sound research

    What This Book Is Not

    Making Sense of Research is different from other books you may have purchased or read about education research in the past. It is not a book that explains how to do research. Neither does this book methodically summarize research findings regarding a laundry list of educational innovations, instructional practices, or well-known reforms so that you won't have to do any thinking for yourself. You have listened to the “experts” for too long. It is time to do your own thinking. While we do provide a variety of interesting examples, illustrations, and case studies from current research findings, we do not provide recommendations about the best practices or the proven methods. Books of this nature become outdated very quickly with the appearance of newer programs and additional research.

    Actually, one of the most important lessons we can learn from research is that all-purpose solutions do not exist. What may work for one set of students and teachers in a particular setting may not be as applicable or effective in another classroom or school. It is our belief that educators are intelligent enough to read the research for themselves, evaluate its trustworthiness as well as its applicability to their own setting, and then make informed decisions. Our goal is to empower you to be confident and accountable regarding your instructional, curricular, and policy decisions in a variety of settings and job roles.

    Overview of the Contents

    Chapter 1 introduces five broad questions that should change the way you read and think about research:

    • The causal question: Does it work?
    • The process question: How does it work?
    • The cost question: Is it worthwhile?
    • The usability question: Will it work for me?
    • The evaluation question: Is it working for me?

    To better illustrate the first four questions, we will discuss research findings in four controversial areas: class size reduction, reading instruction, private-school vouchers, and whole-school reform. While of interest in their own right, the cases will provide a platform for illustrating the concepts and tools of education research. The fifth and final question will be discussed in the context of doing site-specific, user-driven research. Before we tackle the questions, however, Chapter 2 provides a quick tour of the world of education research—a behind-the-scenes look, if you will, at how the research “industry” works.

    Chapters 3 and 4 explain how to find answers to the first big question: Does it work? This question has preoccupied researchers in every branch of the social sciences for many years. Their ingenuity has led to the development of numerous methods to determine whether there are causal effects of a specific treatment or policy on the students, teachers, schools, or districts (or even states and countries) in which it was tested and observed. The best of these methods are referred to as experimental, and they are described in Chapter 3. Other methods, loosely referred to as quasi- or non-experimental, are examined in Chapter 4.

    Chapter 5 looks at the second big question: How does it work? To answer this question, which concerns process rather than outcomes, we will turn our attention to qualitative data. Whereas there is no doubt that quantitative data (e.g., achievement test scores) are an essential aspect of results-based decision making, we will demonstrate that education research is not an either-or endeavor. We must also value, integrate, and use information that is collected through observations, interviews, and the analyses of documents (e.g., memos, letters, vision statements, student work samples, or teachers' journals).

    Chapter 6 takes on the third big question: Is it worthwhile? Though frequently ignored in debates about education research, the costs of research-based decisions must be considered. Do the costs of a program or policy render it infeasible? Is there another, less costly means of accomplishing the same goals?

    In Chapter 7, we examine the fourth big question: Will it work for me? Just because a method or program is shown to have an effect on a group of students or schools in an experimental study, that is no guarantee that it will work for you. We explore how you can determine the generalizability of research to your unique setting using a series of rules of thumb.

    In Chapter 8, we consider the last, but by no means the least, question: Is it working for me? Although published research is an essential guidepost for practitioners, it cannot provide all of the answers once you have made a decision to act upon those research findings. The methods that characterize good research by academics are also powerful tools for investigating the causal effects of locally developed curricula or evaluating purchased programs or reform models. User-driven research can aid in answering the question: Is it working for me?

    A Few Words of Explanation

    As authors, we debated the necessity of including chapters that define and explain (yet again) research and statistics. We have chosen to steer clear of this “kitchen sink” approach to writing about research methods. Our suspicion is that these chapters are rarely read with the attention they deserve. When they are read, the forest is completely lost for the trees by beleaguered students who scramble to memorize obscure formulas. The message of this book is not that statistics and other research methods are unimportant—quite the contrary. They are exceedingly important, but they are nothing more than tools to be marshaled in answering the five critical questions. If the questions are poorly understood or imprecisely formulated, then understanding a set of methodological tools (or not) is of little consequence. As a compromise, however, we include intuitive discussions of techniques used by education researchers—or tools that we wish were used by education researchers. These are complemented by short sections at the end of every chapter with suggestions for additional reading.

    As you read, you will note the use of the term education research rather than the more commonly used educational research. Although the majority of authors who have written on this subject use the designation educational research in their book titles and texts, we prefer the example of Lagemann (2000) and Lagemann and Shulman (1999). We must further acknowledge our tremendous intellectual debt to the methodological writings of Shadish, Cook, and Campbell (2002). The newest edition of the Cook and Campbell (1979) classic has alternately inspired, encouraged, and challenged us with its crisp writing, coherent explanations, and thought-provoking questions. The reader who is intrigued by our discussion would do well to consult these volumes.

    Acknowledgments

    Winston Churchill captured the essence of what it means to write a book when he said, “Writing a book is an adventure. To begin with, it is a toy and an amusement; then it becomes a mistress, and then it becomes a master, and then a tyrant. The last phase is that just as you are about to be reconciled to your servitude, you kill the monster, and fling him out to the public” (Gilbert, 1991, p. 887). Before we fling our joint effort out to the reading public, we must acknowledge the contributions of others to our efforts.

    I (EKM) am grateful to Patrick McEwan for writing a book with his mother. It isn't every son who would have the patience with and faith in his mother to undertake a project such as this. I am proud of his scholarship and commitment to excellence. I also owe an enormous debt of gratitude to James Heald, my academic mentor and long-time friend. Without his confidence in me, aided by a positive recommendation to an editor friend, my educational writing career might never have been launched. Most especially, I offer my heartfelt thanks to my husband and business partner, E. Raymond Adkins, for his support, encouragement, and honest appraisal of my work.

    I (PJM) am indebted to Elaine McEwan, who constantly prodded me—whether she realized it or not—to link my research endeavors to the “real world” of education practice (every education researcher should have an award-winning principal handy to ask the all-important question: “So what?”). I also am deeply grateful to my colleague and mentor, Henry M. Levin, who thoughtfully combines research and practice in education—and who unstintingly shares his lessons with others.

    Last, we both appreciate the frequent readings of this manuscript by our daughter and sister, Emily McEwan-Fujita. Although engaged in personal research and writing, she always took the time to offer helpful comments and suggest excellent alternatives.

    In addition, Corwin Press gratefully acknowledges the contributions of the following reviewers:

    • Dr. Jim Duncan, Director of Schools, Wilson County Schools, Lebanon, TN
    • Linda S. Mueller, Principal/Administrator, Aces High School, Everett, WA
    • Eleanor Perry, ASC Professor, ASUW College of Education, Arizona State University, Tempe, AZ
    • Carolyn S. Ridenour, Professor, Department of Educational Leadership, School of Education and Allied Professions, University of Dayton, Dayton, OH
    • Steven A. Schmitz, Professor, Department of Curriculum and Supervision, College of Education and Professional Studies, Central Washington University, Ellensburg, WA

    About the Authors

    Elaine K. McEwan is an educational consultant with The McEwan-Adkins Group offering workshops and consulting services in instructional leadership, school improvement, and raising reading achievement K-12. A former teacher, media specialist, principal, and assistant superintendent for instruction in a suburban Chicago school district, she is the author of more than 30 books for parents, children, and educators. Some of her titles include The Principal's Guide to Raising Reading Achievement (1998), The Principal's Guide to Raising Math Achievement (2000), Raising Reading Achievement in Middle and High Schools: Five Simple-to-Follow Strategies for Principals (2001), Ten Traits of Highly Effective Teachers: How to Hire, Mentor, and Coach Successful Teachers (2001), and Seven Steps to Effective Instructional Leadership, Second Edition (2002). She was honored by the Illinois Principals Association as an outstanding instructional leader, by the Illinois State Board of Education with an Award of Excellence, and by the National Association of Elementary School Principals as the National Distinguished Principal from Illinois for 1991. She received advanced degrees in library science (MA) and educational administration (EdD) from Northern Illinois University. Visit Elaine's Web site at http://www.elainemcewan.com, where you can contact her, read excerpts from some of her books, or learn about available on-location or online workshops.

    Patrick J. McEwan is Assistant Professor in the Department of Economics at Wellesley College in Wellesley, Massachusetts, and an affiliate of the David Rockefeller Center for Latin American Studies at Harvard University. Previously, he taught in the Department of Educational Policy Studies at the University of Illinois at Urbana-Champaign and served as Assistant Director for Research at the National Center for the Study of Privatization in Education at Teachers College, Columbia University. He completed his PhD in education at Stanford University, in addition to master's degrees in economics and international development. His published books (with Henry Levin) include Cost-Effectiveness Analysis: Methods and Applications, Second Edition (2001) and Cost-Effectiveness and Educational Policy: 2002 Yearbook of the American Education Finance Association (2002). He is the author of numerous journal articles, book chapters, and reports, and he has consulted on education policy and evaluation at the Inter-American Development Bank, RAND, UNESCO, and the ministries of education of several countries. His recent research (with Martin Carnoy) has evaluated the impact of Chile's national voucher plan on the effectiveness and efficiency of primary education.

    Corwin Press

    The Corwin Press logo—a raven striding across an open book—represents the happy union of courage and learning. We are a professional-level publisher of books and journals for K-12 educators, and we are committed to creating and providing resources that embody these qualities. Corwin's motto is “Success for All Learners.”

    Resource: Bibliographies for Case Studies

    Class Size Reduction
    Angrist, J., & Levy, V. (1999). Using Maimonides' rule to estimate the effect of class size on scholastic achievement. Quarterly Journal of Economics, 114(2), 533–576. http://dx.doi.org/10.1162/003355399556061
    Boozer, M., & Rouse, C. (1995). Intraschool variation in class size: Patterns and implications (Working Paper No. 5144). Cambridge, MA: National Bureau of Economic Research.
    Brewer, D. J., Krop, C., Gill, B. P., & Reichardt, R. (1999). Estimating the cost of national class size reductions under different policy alternatives. Educational Evaluation and Policy Analysis, 21(2), 179–192. http://dx.doi.org/10.3102/01623737021002179
    Grissmer, D. (1999). Class size effects: Assessing the evidence, its policy implications, and future research agenda. Educational Evaluation and Policy Analysis, 21(2), 231–248. http://dx.doi.org/10.3102/01623737021002231
    Grissmer, D. (2002). Cost-effectiveness and cost-benefit analysis: The effect of targeting interventions. In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002 yearbook of the American Education Finance Association (pp. 97–110). Larchmont, NY: Eye on Education.
    Hanushek, E. A. (1997). Assessing the effect of school resources on student performance: An update. Educational Evaluation and Policy Analysis, 19(2), 141–164.
    Hanushek, E. A. (1999). Some findings from an independent investigation of the Tennessee STAR experiment and from other investigations of class size effects. Educational Evaluation and Policy Analysis, 21(2), 143–163. http://dx.doi.org/10.3102/01623737021002143
    Harris, D. (2002). Identifying optimal class sizes and teacher salaries. In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002 yearbook of the American Education Finance Association (pp. 177–191). Larchmont, NY: Eye on Education.
    Illig, D. C. (1996). Reducing class size: A review of the literature and options for consideration. Retrieved June 11, 2002, from http://www.library.ca.gov/CRB/clssz/clssiz.html#RTFToC13
    Jacobson, L. (2001, Feb. 28). Research: Sizing up small classes. Education Week. Retrieved April 23, 2002, from http://www.edweek.org/ew/ewstory.cfm?slug=24classsize.h20
    Krueger, A. B. (2000, October). Understanding the magnitude and effect of class size on student achievement (Working Paper No. 121). Economic Policy Institute. Retrieved May 27, 2002, from http://www.epinet.org/Workingpapers/class_size.html
    Krueger, A. B., & Whitmore, D. M. (2001). The effect of attending a small class in the early grades on college-test taking and middle school test results: Evidence from Project STAR. Economic Journal, 111(468), 1–28. http://dx.doi.org/10.1111/1468-0297.00586
    Levin, H. M., Glass, G. V., & Meister, G. R. (1987). Cost-effectiveness of computer assisted instruction. Evaluation Review, 11(1), 50–72. http://dx.doi.org/10.1177/0193841X8701100103
    Molnar, A., Smith, P., Zahorik, J., Palmer, A., Halbach, A., & Ehrle, K. (1999). Evaluating the SAGE Program: A pilot program in targeted pupil-teacher reduction. Educational Evaluation and Policy Analysis, 21(2), 165–177. http://dx.doi.org/10.3102/01623737021002165
    Mosteller, F. (1995). The Tennessee study of class size in the early school grades. The Future of Children, 5(2), 113–127. http://dx.doi.org/10.2307/1602360
    National Center for Education Statistics. (2002). High school and beyond. Retrieved May 28, 2002, from http://nces.ed.gov/surveys/hsb/
    Nye, B., Hedges, L. V., & Konstantopoulos, S. (1999). The long-term effects of small classes: A five-year follow-up of the Tennessee class size experiment. Educational Evaluation and Policy Analysis, 21(2), 127–142. http://dx.doi.org/10.3102/01623737021002127
    Stecher, B. M., & Bohrnstedt, G. W. (2002). Class size reduction in California: Summary findings from 1999–00 and 2000–01 (Technical Report). CSR Research Consortium. Retrieved May 28, 2002, from http://www.classize.org/techreport/index-01.htm
    Urquiola, M. (2000). Identifying class size effects in developing countries: Evidence from rural schools in Bolivia. Unpublished manuscript, Cornell University.
    Phonics Instruction
    Baker, S., Berninger, V. W., Bruck, M., Chapman, J., Eden, G., Elbaum, B., et al. (2002, May 21). Evidence-based research on Reading Recovery. [A letter sent to educational policymakers regarding the effectiveness of Reading Recovery] Retrieved May 28, 2002, from http://www.educationnews.org/ReadingRecoveryisnotsuccessful.htm
    Carbo, M. (1988). Debunking the great phonics myth. Phi Delta Kappan, 70, 226–240.
    Carbo, M. (1996). Whole language or phonics? Use both. Education Digest, 61, 60–64.
    Chall, J. S. (1967). Learning to read: The great debate. New York: McGraw-Hill.
    Chall, J. S. (1989). Learning to read: The great debate 20 years later—A response to “Debunking the Great Phonics Myth.” Phi Delta Kappan, 70, 521–537.
    Clay, M. M. (1985). The early detection of reading difficulties (3rd ed.). Auckland, New Zealand: Heinemann.
    Coles, G. S. (1997, April 2). Phonics findings discounted as part of flawed research [Letter to the editor]. Education Week, 45.
    Dyer, P. (1992). Reading Recovery: A cost-effectiveness and educational outcomes analysis. ERS Spectrum, 10, 10–19.
    Ehri, L. C., Nunes, S. R., Stahl, S. A., & Willows, D. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel's meta-analysis. Review of Educational Research, 71(3), 393–447. http://dx.doi.org/10.3102/00346543071003393
    Elbaum, B., Vaughn, S., Hughes, M. T., & Moody, S. W. (2000). How effective are one-to-one tutoring programs in reading for elementary students at-risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605–619. http://dx.doi.org/10.1037/0022-0663.92.4.605
    Evans, T. L. P. (1996). I can read deze books: A quantitative comparison of the Reading Recovery program and a small-group intervention. Unpublished doctoral dissertation, Auburn University, Auburn, Alabama.
    Foorman, B. R., Fletcher, J. M., Francis, D. J., Schatschneider, C., & Mehta, P. (1998). The role of instruction in learning to read: Preventing reading failure in at-risk children. Journal of Educational Psychology, 90(1), 37–55. http://dx.doi.org/10.1037/0022-0663.90.1.37
    Foorman, B. R., Fletcher, J. M., Francis, D. J., & Schatschneider, C. (2000). Response: Misrepresentation of research by other researchers. Educational Researcher, 29(6), 27–37. http://dx.doi.org/10.3102/0013189X029006027
    Gaskins, I. W. (1998). There's more to teaching at-risk and delayed readers than good reading instruction. Reading Teacher, 51(7), 534–547.
    Gaskins, I. W., Downer, M. A., Anderson, R. C., Cunningham, P. M., Gaskins, R. W., & Schommer, M. (1988). A metacognitive approach to phonics: Using what you know to decode what you don't know. Remedial and Special Education, 9(1), 36–41. http://dx.doi.org/10.1177/074193258800900107
    Gaskins, I. W., Ehri, L. C., Cress, C., O'Hara, C., & Donnelly, K. (1997a). Analyzing words and making discoveries about the alphabetic system: Activities for beginning readers. Language Arts, 74, 172–184.
    Gaskins, I. W., Ehri, L. C., Cress, C., O'Hara, C., & Donnelly, K. (1997b). Procedures for word learning: Making discoveries about words. The Reading Teacher, 50(4), 312–327.
    Greaney, K. T., Tunmer, W. E., & Chapman, J. W. (1997). Effects of rime-based orthographic analogy training on the word recognition skills of children with reading disability. Journal of Educational Psychology, 89, 645–651. http://dx.doi.org/10.1037/0022-0663.89.4.645
    Hiebert, E. H. (1994). Reading Recovery in the United States: What difference does it make to an age cohort? Educational Researcher, 23(9), 15–25. http://dx.doi.org/10.3102/0013189X023009015
    Lindamood, C., & Lindamood, P. (1984). Auditory discrimination in depth. San Luis Obispo, CA: Gander.
    Mathes, P. G., & Torgesen, J. K. (1997). A call for equity in reading instruction for all students: A response to Allington and Woodside-Jiron. Educational Researcher, 29(6), 4–14. http://dx.doi.org/10.3102/0013189X029006004
    McEwan, E. K. (2002). Teach them all to read: Catching the kids who fall through the cracks. Thousand Oaks, CA: Corwin.
    National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Reports of the Subgroups. Rockville, MD: National Institute of Child Health and Human Development.
    Reading Recovery Council of North America. (2002a). Reading Recovery Facts and Figures (U.S. 1984–1999). Retrieved May 28, 2002, from http://www.readingrecovery.org/sections/reading/facts.asp
    Reading Recovery Council of North America (2002b). The cost-benefits of Reading Recovery. Retrieved May 28, 2002, from http://www.readingrecovery.org/sections/reding/cost.asp
    Taylor, B. M., Anderson, R. C., Au, K. H., & Raphael, T. E. (2000). Discretion in the translation of research to policy: A case from beginning reading. Educational Researcher, 29(6), 16–26. http://dx.doi.org/10.3102/0013189X029006016
    Torgesen, J. K., Wagner, R. K., Rashotte, C. A., Rose, E., Lindamood, P., & Conway, T. (1999). Preventing reading failure in young children with phonological processing difficulties: Group and individual responses to instruction. Journal of Educational Psychology, 91(4), 579–593. http://dx.doi.org/10.1037/0022-0663.91.4.579
    Private-School Vouchers
    Bartell, E. (1968). Costs and benefits of Catholic elementary and secondary schools. Notre Dame, IN: Notre Dame Press.
    Boaz, D., & Barrett, R. M. (1996). What would a school voucher buy? The real cost of private schools (Cato Briefing Paper No. 25). Washington, DC: Cato Institute.
    Bryk, A. S., Lee, V. E., & Holland, P. B. (1993). Catholic schools and the common good. Cambridge, MA: Harvard University Press.
    Chubb, J. E., & Moe, T. M. (1990). Politics, markets, and America's schools. Washington, DC: Brookings Institution.
    Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, J., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966). Equality of educational opportunity (Office of Education Publication No. OE-38001). Washington, DC: Government Printing Office.
    Coleman, J. S., Hoffer, T., & Kilgore, S. (1982). High school achievement: Public, Catholic, and private schools compared. New York: Basic Books.
    Evans, W. N., & Schwab, R. M. (1995). Finishing high school and starting college: Do Catholic schools make a difference? Quarterly Journal of Economics, 110(4), 941–974. http://dx.doi.org/10.2307/2946645
    Fetterman, D. M. (1982). Ibsen's baths: Reactivity and insensitivity. Educational Evaluation and Policy Analysis, 4, 261–279.
    Friedman, M. (1955). The role of government in education. In R. A. Solo (Ed.), Economics and the public interest (pp. 123–144). New Brunswick, NJ: Rutgers University Press.
    Gamoran, A. (1996). Student achievement in public magnet, public comprehensive, and private city high schools. Educational Evaluation and Policy Analysis, 18(1), 1–18.
    Greene, J. P., Peterson, P. E., & Du, J. (1998). School choice in Milwaukee: A randomized experiment. In P. E.Peterson & B. C.Hassel (Eds.), Learning from school choice (pp. 335–356). Washington, DC: Brookings Institution.
    Howell, W. G., Wolf, P. J., Campbell, D. E., & Peterson, P. E. (2002). School vouchers and academic performance: Results from three randomized field trials. Journal of Policy Analysis and Management, 21(2), 191–217. http://dx.doi.org/10.1002/pam.10023
    Hoxby, C. M. (1998). What do America's “traditional” forms of school choice teach us about school choice reforms? Federal Reserve Bank of New York Economic Policy Review, 4(1), 47–59.
    King, J. A. (1994). Meeting the needs of at-risk students: A cost analysis of three models. Educational Evaluation and Policy Analysis, 16, 1–19.
    Levin, H. M. (1991). The economics of educational choice. Economics of Education Review, 10(2), 137–158. http://dx.doi.org/10.1016/0272-7757%2891%2990005-A
    Levin, H. M. (1998). Educational vouchers: Effectiveness, choice, and costs. Journal of Policy Analysis and Management, 17(3), 373–391. http://dx.doi.org/10.1002/%28SICI%291520-6688%28199822%2917:3%3C373::AID-PAM1%3E3.0.CO;2-D
    Levin, H. M. (2002). Issues in designing cost-effectiveness comparisons of whole-school reform. In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002 yearbook of the American Education Finance Association (pp. 71–96). Larchmont, NY: Eye on Education.
    Levin, H. M., & Driver, C. E. (1997). Costs of an educational voucher system. Education Economics, 5(3), 265–283. http://dx.doi.org/10.1080/09645299700000023
    McEwan, P. J. (2000). The potential impact of large-scale voucher programs. Review of Educational Research, 70(2), 103–149. http://dx.doi.org/10.3102/00346543070002103
    Mayer, D. P., Peterson, P. E., Myers, D. E., Turtle, C. C., & Howell, W. G. (2002). School choice in New York City after three years: An evaluation of the School Choice Scholarships Program (Report 8404–045, Mathematica Policy Research). Retrieved May 25, 2002, from http://www.mathematica-mpr.com/PDFs/nycfull.pdf
    Millsap, M. A., Chase, A., Obeidallah, D., Perez-Smith, A., Brigham, N., & Johnson, K. (2000). Evaluation of Detroit's Comer Schools and Families Initiative: Final report. Cambridge, MA: Abt.
    Moe, T. M. (Ed.). (1995). Private vouchers. Stanford, CA: Hoover Institution.
    Molnar, A., Smith, P., Zahorik, J., Palmer, A., Halbach, A., & Ehrle, K. (1999). Evaluating the SAGE Program: A pilot program in targeted pupil-teacher reduction. Educational Evaluation and Policy Analysis, 21(2), 165–177. http://dx.doi.org/10.3102/01623737021002165
    Neal, D. (1998). What have we learned about the benefits of private schooling? Federal Reserve Bank of New York Economic Policy Review, 4(1), 79–86.
    Olson, L. (1996, Sept. 4). New studies on private choice contradict each other. Education Week. Retrieved May 28, 2002, from http://www.edweek.org/ew/vol-16/01choice.h16
    Rouse, C. E. (1998). Private school vouchers and student achievement: An evaluation of the Milwaukee parental choice program. Quarterly Journal of Economics, 113(2), 553–602. http://dx.doi.org/10.1162/003355398555685
    West, E. G. (1967). Tom Paine's voucher scheme for public education. Southern Economic Journal, 33, 378–382. http://dx.doi.org/10.2307/1055119
    Witte, J. F. (1998). The Milwaukee voucher experiment. Educational Evaluation and Policy Analysis, 20(4), 229–251.
    Whole-School Reform
    Barnett, W. S. (1996). Economics of school reform: Three promising models. In H. F. Ladd (Ed.), Holding schools accountable: Performance-based reform in education (pp. 299–326). Washington, DC: Brookings Institution.
    Bloom, H., Ham, S., Kagehiro, S., Melton, L., O'Brien, J., Rock, J., & Doolittle, F. (2001). Evaluating the Accelerated Schools Program: A look at its early implementation and impact on student achievement in eight schools. New York: Manpower Demonstration Research Corporation.
    Cook, T. D., Habib, F. N., Phillips, M., Settersten, R. A., Shagle, S. C., & Degirmencioglu, S. M. (1999). Comer's School Development Program in Prince George's County, Maryland: A theory-based evaluation. American Educational Research Journal, 36(3), 543–597. http://dx.doi.org/10.3102/00028312036003543
    Comer School Development Program. (2002, February 14). Retrieved May 29, 2002, from Yale University, Yale Child Study Center Web Site http://www.med.yale.edu/comer
    Cook, T. D., Murphy, R. F., & Hunt, H. D. (2000). Comer's School Development Program in Chicago: A theory-based evaluation. American Educational Research Journal, 37(2), 535–597. http://dx.doi.org/10.3102/00028312037002535
    Levin, H. M. (2002). Issues in designing cost-effectiveness comparisons of whole-school reform. In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002 yearbook of the American Education Finance Association (pp. 71–96). Larchmont, NY: Eye on Education.
    National Center for Accelerated Schools. (2002). Retrieved April 23, 2002, from http://www.acceleratedschools.net/
    Northwest Regional Educational Laboratory (2001). The catalog of school reform models. Retrieved April 23, 2001, from http://www.nwrel.org/scpd/catalog/modellist.asp
    Slavin, R. E., & Madden, N. A. (2001). One million children: Success for All. Thousand Oaks, CA: Corwin.

    References

    Advantage Learning Systems. (n.d.). Accelerated Reader. Wisconsin Rapids, WI: Author.
    Anderson, G. L., Herr, K., & Nihlen, A. S. (1994). Studying your own school: An educator's guide to qualitative practitioner research. Thousand Oaks, CA: Corwin.
    Andrews, R. (1989). The Illinois principal as instructional leader: A concept and definition paper. Illinois Principal, pp. 4–12.
    Angrist, J. D., & Krueger, A. B. (1991). Does compulsory school attendance affect schooling and earnings? Quarterly Journal of Economics, 106, 979–1014. http://dx.doi.org/10.2307/2937954
    Angrist, J., & Levy, V. (1999). Using Maimonides' rule to estimate the effect of class size on scholastic achievement. Quarterly Journal of Economics, 114(2), 533–576. http://dx.doi.org/10.1162/003355399556061
    Argyris, C., Putnam, R., & Smith, D. M. (1985). Action science: Concepts, methods, and skills for research and intervention. San Francisco: Jossey-Bass.
    Ashenfelter, O., & Rouse, C. (2000). Schooling, intelligence, and income in America. In K. Arrow, S. Bowles, & S. Durlauf (Eds.), Meritocracy and economic inequality (pp. 89–117). Princeton, NJ: Princeton University Press.
    Baker, S., Berninger, V. W., Bruck, M., Chapman, J., Eden, G., Elbaum, B., et al. (2002, May 21). Evidence-based research on Reading Recovery. [A letter sent to educational policymakers regarding the effectiveness of Reading Recovery] Retrieved May 21, 2002, from http://www.educationnews.org/ReadingRecoveryisnotsuccessful.htm
    Barnes, N. (2001, April 25). What makes research useful? A look at school-based inquiry in small schools. Education Week, 40, 42.
    Barnett, W. S. (1996). Economics of school reform: Three promising models. In H. F. Ladd (Ed.), Holding schools accountable: Performance-based reform in education (pp. 299–326). Washington, DC: Brookings Institution.
    Bartell, E. (1968). Costs and benefits of Catholic elementary and secondary schools. Notre Dame, IN: Notre Dame University Press.
    Berliner, D. C., & Casanova, U. (1993). Putting research to work in your school. New York: Scholastic.
    Bloom, H., Ham, S., Kagehiro, S., Melton, L., O'Brien, J., Rock, J., & Doolittle, F. (2001). Evaluating the Accelerated Schools Program: A look at its early implementation and impact on student achievement in eight schools. New York: Manpower Demonstration Research Corporation.
    Boaz, D., & Barrett, R. M. (1996). What would a school voucher buy? The real cost of private schools (Cato Briefing Paper No. 25). Washington, DC: Cato Institute.
    Bogdan, R., & Biklen, S. N. (1998). Qualitative research for education: An introduction to theory and methods. Boston: Allyn & Bacon.
    Boozer, M., & Rouse, C. (1995). Intraschool variation in class size: Patterns and implications (Working Paper No. 5144). Cambridge, MA: National Bureau of Economic Research.
    Boruch, R. (1997). Randomized experiments for planning and evaluation: A practical guide. Thousand Oaks, CA: Sage.
    Boruch, R., De Moya, D., & Snyder, B. (2002). The importance of randomized field trials in education and related areas. In F. Mosteller & R. F. Boruch (Eds.), Evidence matters: Randomized trials in education research (pp. 50–79). Washington, DC: Brookings Institution.
    Bray, J. N., Lee, J., Smith, L. L., & Yorks, L. (2002). Collaborative inquiry in practice. Thousand Oaks, CA: Sage.
    Brewer, D. J., Krop, C., Gill, B. P., & Reichardt, R. (1999). Estimating the cost of national class size reductions under different policy alternatives. Educational Evaluation and Policy Analysis, 21(2), 179–192. http://dx.doi.org/10.3102/01623737021002179
    Brookings Institution. (1999, December 8). Can we make education policy on the basis of evidence? What constitutes high quality education research and how can it be incorporated into policymaking? In Proceedings of a Brookings Press Forum. Washington, DC: Brookings Institution.
    Brooks, A., & Watkins, K. E. (1994). A new era for action technologies: A look at the issues. In A. Brooks & K. E. Watkins (Eds.), The emerging power of action inquiry technologies (pp. 5–16). San Francisco: Jossey-Bass.
    Bruce, D. (Ed.). (1969). My brimful book: Favorite poems of childhood, Mother Goose rhymes, and animal stories. New York: Platt & Munk.
    Bryk, A. S., Lee, V. E., & Holland, P. B. (1993). Catholic schools and the common good. Cambridge, MA: Harvard University Press.
    California Department of Education. (1987). Caught in the middle. Sacramento, CA: CDE.
    Campbell Collaboration. (2001). Frequently asked questions about The Campbell Collaboration. Retrieved March 23, 2002, from http://campbell.gse.upenn.edu/c2-FAQ.htm
    Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54, 297–312. http://dx.doi.org/10.1037/h0040950
    Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally
    Carbo, M. (1988). Debunking the great phonics myth. Phi Delta Kappan, 70, 226–240.
    Carbo, M. (1996). Whole language or phonics? Use both. Education Digest, 61, 60–64.
    Carnegie Corporation. (1989). Turning points: Preparing American youth for the 21st century. Washington, DC: Carnegie Council on Adolescent Development.
    Carr, W., & Kemmis, S. (1986). Becoming critical: Education, knowledge, and action research. London: Falmer.
    Carroll, L. (1946). Alice's adventures in wonderland and through the looking glass. New York: Grossett & Dunlap.
    Chall, J. S. (1967). Learning to read: The great debate. New York: McGraw-Hill.
    Chall, J. S. (1989). Learning to read: The great debate 20 years later—A response to “Debunking the Great Phonics Myth.” Phi Delta Kappan, 70, 521–537.
    Chubb, J. E., & Moe, T. M. (1990). Politics, markets, and America's schools. Washington, DC: Brookings Institution.
    Clay, M. M. (1985). The early detection of reading difficulties (3rd ed.). Auckland, New Zealand: Heinemann.
    Clune, W. H. (2002). Methodological strength and policy usefulness of cost-effectiveness research. In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002 yearbook of the American Education Finance Association (pp. 55–68). Larchmont, NY: Eye on Education.
    Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, J., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966). Equality of educational opportunity (Office of Education Publication No. OE-38001). Washington, DC: Government Printing Office.
    Coleman, J. S., Hoffer, T., & Kilgore, S. (1982). High school achievement: Public, Catholic, and private schools compared. New York: Basic Books.
    Coles, G. S. (1997, April 2). Phonics findings discounted as part of flawed research [Letter to the editor]. Education Week, p. 45.
    Comer School Development Program. (2002, February 14). Retrieved May 29, 2002, from Yale University, Yale Child Study Center Web Site http://www.med.yale.edu/comer/
    Cook, D. R., & LaFleur, N. K. (1975). A guide to educational research. Boston: Allyn & Bacon.
    Cook, T. D. (1999). Considering the major arguments against random assignment: An analysis of the intellectual culture surrounding evaluation in American schools of education. Unpublished manuscript, Northwestern University.
    Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally
    Cook, T. D., Habib, F. N., Phillips, M., Settersten, R. A., Shagle, S. C., & Degirmencioglu, S. M. (1999). Comer's School Development Program in Prince George's County, Maryland: A theory-based evaluation. American Educational Research Journal, 36(3), 543–597. http://dx.doi.org/10.3102/00028312036003543
    Cook, T. D., Murphy, R. F., & Hunt, H. D. (2000). Comer's School Development Program in Chicago: A theory-based evaluation. American Educational Research Journal, 37(2), 535–597. http://dx.doi.org/10.3102/00028312037002535
    Cooper, H., & Hedges, L. V. (Eds.). (1994). The handbook of research synthesis. New York: Russell Sage Foundation.
    Covey, S. (1990). The 7 habits of highly effective people: Powerful lessons in personal change. New York: Fireside.
    Cronbach, L. J. (1975). Beyond two disciplines of scientific psychology. American Psychologist, 30, 116–127. http://dx.doi.org/10.1037/h0076829
    Cunningham, J. B. (1993). Action research and organizational development. Westport, CT: Praeger.
    Dewey, J. (1929). The quest for certainty. New York: Minton, Balch.
    Dewey, J. (1933). How we think (Rev. ed.). Lexington, MA: Heath.
    Donald, A. (2002). A practical guide to evidence-based medicine. Medscape Psychiatry & Mental Health eJournal, 7(2). Retrieved May 28, 2002, from http://www.medscape.com/viewarticle/430709
    Dorn, R. (1995). The changing roles of principals and staff members. Wingspan, 10(2), 7–10.
    DuFour, R. (2000). Data puts a face on shared vision. Journal of Staff Development, 21(1), 71–72.
    DuFour, R., & Eaker, R. (1998). Professional learning communities at work: Best practices for enhancing student achievement. Bloomington, IN: National Educational Service.
    Dyer, P. (1992). Reading Recovery: A cost-effectiveness and educational outcomes analysis. ERS Spectrum, 10, 10–19.
    Editorial. (2002, March 20). USA Today, p. 14A.
    Edmondson, A. (2001, June 19). Watson kills all reform models for city schools. The Memphis Commercial Appeal. Retrieved May 30, 2002, from http://nl12.newsbank.com
    Ehri, L. C., Nunes, S. R., Stahl, S. A., & Willows, D. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel's meta-analysis. Review of Educational Research, 71(3), 393–447. http://dx.doi.org/10.3102/00346543071003393
    Elbaum, B., Vaughn, S., Hughes, M. T., & Moody, S. W. (2000). How effective are one-to-one tutoring programs in reading for elementary students at-risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605–619. http://dx.doi.org/10.1037/0022-0663.92.4.605
    Establish CSR Program Bill of 1996, Cal. S.B. 1777 (1996). Retrieved December 15, 2002, from http://www.cde.ca.gov/classsize/legis/sb_1777.htm
    Evans, T. L. P. (1996). I can read deze books: A quantitative comparison of the Reading Recovery program and a small-group intervention. Unpublished doctoral dissertation. Auburn University, Auburn, Alabama.
    Evans, W. N., & Schwab, R. M. (1995). Finishing high school and starting college: Do Catholic schools make a difference? Quarterly Journal of Economics, 110(4), 941–974. http://dx.doi.org/10.2307/2946645
    Fetterman, D. M. (1982). Ibsen's baths: Reactivity and insensitivity. Educational Evaluation and Policy Analysis, 4, 261–279.
    Filstead, W. (1970). Qualitative methodology. Chicago: Markham.
    Fine, M. (1991). Framing dropouts: Notes on the politics of an urban high school. Albany, NY: State University of New York Press.
    Fleischer, C. (1995). Composing teacher-research: A prosaic history. Albany: State University of New York Press.
    Foorman, B. R., Fletcher, J. M., Francis, D. J., & Schatschneider, C. (2000). Response: Misrepresentation of research by other researchers. Educational Researcher, 29(6), 27–37. http://dx.doi.org/10.3102/0013189X029006027
    Foorman, B. R., Fletcher, J. M., Francis, D. J., Schatschneider, C., & Mehta, P. (1998). The role of instruction in learning to read: Preventing reading failure in at-risk children. Journal of Educational Psychology, 90(1), 37–55. http://dx.doi.org/10.1037/0022-0663.90.1.37
    Freire, P. (1993). Pedagogy of the oppressed (rev. 20th anniv. ed.). New York: Continuum. (Original work published in 1970)
    Friedman, M. (1955). The role of government in education. In R. A. Solo (Ed.), Economics and the public interest (pp. 123–144). New Brunswick, NJ: Rutgers University Press.
    Gage, N. L. (1989). The paradigm wars and their aftermath: A “historical” sketch of research on teaching since 1989. Teachers College Record, 91(2), 135–156.
    Gamoran, A. (1996). Student achievement in public magnet, public comprehensive, and private city high schools. Educational Evaluation and Policy Analysis, 18(1), pp. 1–18.
    Gaskins, I. W. (1998). There's more to teaching at-risk and delayed readers than good reading instruction. Reading Teacher, 51(7), 534–547.
    Gaskins, I. W., Downer, M. A., Anderson, R. C., Cunningham, P. M., Gaskins, R. W., & Schommer, M. (1988). A metacognitive approach to phonics: Using what you know to decode what you don't know. Remedial and Special Education, 9(1), 36–41. http://dx.doi.org/10.1177/074193258800900107
    Gaskins, I. W., Ehri, L. C., Cress, C., O'Hara, C., & Donnelly, K. (1997a). Analyzing words and making discoveries about the alphabetic system: Activities for beginning readers. Language Arts, 74, 172–184.
    Gaskins, I. W., Ehri, L. C., Cress, C., O'Hara, C., & Donnelly, K. (1997b). Procedures for word learning: Making discoveries about words. The Reading Teacher, 50(4), 312–327.
    Gaskins, I. W., & Elliot, T. T. (1991). Implementing cognitive strategy instruction across the school: The Benchmark manual for teachers. Cambridge, MA: Brookline.
    Gilbert, M. (1991). Churchill: A life. New York: Henry Holt.
    Greaney, K. T., Tunmer, W. E., & Chapman, J. W. (1997). Effects of rime-based orthographic analogy training on the word recognition skills of children with reading disability. Journal of Educational Psychology, 89, 645–651. http://dx.doi.org/10.1037/0022-0663.89.4.645
    Greenberg, D., & Shroder, M. (1997). The digest of social experiments (2nd ed.). Washington, DC: Urban Institute Press.
    Greene, J. P., Peterson, P. E., & Du, J. (1998). School choice in Milwaukee: A randomized experiment. In P. E. Peterson & B. C. Hassel (Eds.), Learning from school choice (pp. 335–356). Washington, DC: Brookings Institution.
    Greenleaf, C. (1999, April). Apprenticing adolescent readers to academic literacy. Paper presented at the annual meeting of the American Educational Research Association, Montreal.
    Greenwald, R., Hedges, L. V., & Laine, R. D. (1996). The effect of school resources on student achievement. Review of Educational Research, 66(3), 361–396. http://dx.doi.org/10.3102/00346543066003361
    Griliches, Z. (1985). Data and econometricians—the uneasy alliance. American Economic Review, 75(2), 196–200.
    Grissmer, D. (1999). Class size effects: Assessing the evidence, its policy implications, and future research agenda. Educational Evaluation and Policy Analysis, 21(2), 231–248. http://dx.doi.org/10.3102/01623737021002231
    Grissmer, D. (2002). Cost-effectiveness and cost-benefit analysis: The effect of targeting interventions. In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002 yearbook of the American Education Finance Association (pp. 97–110). Larchmont, NY: Eye on Education.
    Grundy, S. (1982). Three modes of action research. Curriculum Perspectives, 2(3), 23–34.
    Guryan, J. (2001). Does money matter? Regression-discontinuity estimates from education finance reform in Massachusetts (Working Paper No. 8269). Cambridge, MA: National Bureau of Economic Research.
    Hale, C. R. (2000). What is activist research? Items and Issues, 2(1–2), 13.
    Hanushek, E. A. (1986). The economics of schooling: Production and efficiency in public schools. Journal of Economic Literature, 24(3), 1141–1177.
    Hanushek, E. A. (1997). Assessing the effect of school resources on student performance: An update. Educational Evaluation and Policy Analysis, 19(2), 141–164.
    Hanushek, E. A. (1999). Some findings from an independent investigation of the Tennessee STAR experiment and from other investigations of class size effects. Educational Evaluation and Policy Analysis, 21(2), 143–163. http://dx.doi.org/10.3102/01623737021002143
    Harman, W. W. (1990). Shifting context for executive behavior: Signs of change and revaluation. In S. Srivastva, D. L. Cooperrider, and Associates (Eds.), Appreciative management and leadership: The power of positive thought and action in organizations (pp. 37–54). San Francisco: Jossey-Bass.
    Harris, D. (2002). Identifying optimal class sizes and teacher salaries. In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002 yearbook of the American Education Finance Association (pp. 177–191). Larchmont, NY: Eye on Education.
    Haycock, K., & Ames, N. (2000, July 24–25). Where are we now? Taking stock of middle grades education. In Proceedings of the National Conference on Curriculum, Instruction, and Assessment in the Middle Grades: Linking Research and Practice (pp. 49–77). Washington, DC: National Educational Research Policy and Priorities Board, U.S. Department of Education.
    Heron, J. (1981). Experiential research methodology. In P. Reason & J. Rowan (Eds.), Human inquiry (pp. 153–166). Chichester, UK: Wiley.
    Hiebert, E. H. (1994). Reading Recovery in the United States: What difference does it make to an age cohort? Educational Researcher, 23(9), 15–25. http://dx.doi.org/10.3102/0013189X023009015
    Howell, W. G., Wolf, P. J., Campbell, D. E., & Peterson, P. E. (2002). School vouchers and academic performance: Results from three randomized field trials. Journal of Policy Analysis and Management, 21(2), 191–217. http://dx.doi.org/10.1002/pam.10023
    Hoxby, C. M. (1998). What do America's “traditional” forms of school choice teach us about school choice reforms? Federal Reserve Bank of New York Economic Policy Review, 4(1), 47–59.
    Illig, D. C. (1996). Reducing class size: A review of the literature and options for consideration. Retrieved June 11, 2002, from http://www.library.ca.gov/CRB/clssz/clssiz.html#RTFToC13
    Interagency Education Research Initiative. (2002). Retrieved May 23, 2002, from http://www.ed.gov/offices/OERI/IERI/index.html
    Jacobson, L. (2001, Feb. 28). Research: Sizing up small classes. Education Week. Retrieved April 23, 2002, from http://www.edweek.com/ew/ewstory.cfm?slug=24classsize.h20
    Johnston, R. C. (1996, August 7). Calif. budget allows for smaller classes. Education Week. Retrieved October 2, 2002, from http://www.edweek.com
    Jones, E. M., Gottfredson, G. D., & Gottfredson, D. C. (1997). Success for some: An evaluation of a Success for All program. Evaluation Review, 21(6), 643–670. http://dx.doi.org/10.1177/0193841X9702100601
    Jordan, H., Mendro, R., & Weerasinghe, D. (1997, July). Teacher effects on longitudinal student achievement. Paper presented at the CREATE Annual Meeting, Indianapolis, IN.
    Kaestle, C. F. (1993). The awful reputation of education research. Educational Researcher, 22(1), 26–31.
    King, J. A. (1994). Meeting the needs of at-risk students: A cost analysis of three models. Educational Evaluation and Policy Analysis, 16, 1–19.
    King, G., Keohane, R. O., & Verba, S. (1994). Designing social inquiry: Scientific inference in qualitative research. Princeton, NJ: Princeton University Press.
    Krueger, A. B. (1999). Experimental estimates of education production functions. Quarterly Journal of Economics, 114(2), 497–532. http://dx.doi.org/10.1162/003355399556052
    Krueger, A. B. (2000, October). Understanding the magnitude and effect of class size on student achievement (Working Paper No. 121, Economic Policy Institute). Retrieved May 27, 2002, from http://www.epinet.org/Workingpapers/classsize.html
    Krueger, A. B., & Whitmore, D. M. (2001). The effect of attending a small class in the early grades on college-test taking and middle school test results: Evidence from Project STAR. Economic Journal, 111(468), 1–28. http://dx.doi.org/10.1111/1468-0297.00586
    Lagemann, E. C. (2000). An elusive science: The troubling history of education research. Chicago: University of Chicago Press.
    Lagemann, E. C. (2002, January 24). Useable knowledge in education: A memorandum for the Spencer Foundation Board of Directors. Retrieved March 13, 2002, from http://www.spencer.org/publications/usable_knowledge_report_ecl_a.htm
    Lagemann, E. C., & Shulman, L. S. (Eds.). (1999). Issues in education research: Problems and possibilities. San Francisco: Jossey-Bass.
    Levin, H. M. (1975). Cost-effectiveness in evaluation research. In M. Guttentag & E. Struening (Eds.), Handbook of evaluation research (Vol. 2, pp. 89–122). Beverly Hills, CA: Sage.
    Levin, H. M. (1991). The economics of educational choice. Economics of Education Review, 10(2), 137–158. http://dx.doi.org/10.1016/0272-7757%2891%2990005-A
    Levin, H. M. (1998). Educational vouchers: Effectiveness, choice, and costs. Journal of Policy Analysis and Management, 17(3), 373–391. http://dx.doi.org/10.1002/%28SICI%291520-6688%28199822%2917:3%3C373::AID-PAM1%3E3.0.CO;2-D
    Levin, H. M. (2002). Issues in designing cost-effectiveness comparison of whole-school reforms. In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002, yearbook of the American Education Finance Association (pp. 71–96). Larchmont, NY: Eye on Education.
    Levin, H. M., & Driver, C. E. (1997). Costs of an educational voucher system. Education Economics, 5(3), 265–283. http://dx.doi.org/10.1080/09645299700000023
    Levin, H. M., Glass, G. V., & Meister, G. R. (1987). Cost-effectiveness of computer assisted instruction. Evaluation Review, 11(1), 50–72. http://dx.doi.org/10.1177/0193841X8701100103
    Levin, H. M., & McEwan, P. J. (2001). Cost-effectiveness analysis: Methods and applications (2nd ed.). Thousand Oaks, CA: Sage.
    Levin, H. M., & McEwan, P. J. (Eds.). (2002). Cost-effectiveness and educational policy: 2002, yearbook of the American Education Finance Association. Larchmont, NY: Eye on Education.
    Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2(4), 34–46. http://dx.doi.org/10.1111/j.1540-4560.1946.tb02295.x
    Lewis-Beck, M. S. (1980). Applied regression: An introduction (Quantitative Applications in the Social Sciences, No. 22). Newbury Park, CA: Sage.
    Lindamood, C., & Lindamood, P. (1984). Auditory discrimination in depth. San Luis Obispo, CA: Gander.
    Lindblom, C. E., & Cohen, D. K. (1979). Usable knowledge: Social science and social problem solving. New Haven, CT: Yale University Press.
    Lipsitz, J. (1984). Successful schools for young adolescents. New Brunswick, NJ: Transaction.
    Masse, L. N., & Barnett, W. S. (2002). A benefit-cost analysis of the Abecedarian Early Childhood Intervention. In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002, yearbook of the American Education Finance Association (pp. 157–176). Larchmont, NY: Eye on Education.
    Mathes, P. G., & Torgesen, J. K. (1997). A call for equity in reading instruction for all students: A response to Allington and Woodside-Jiron. Educational Researcher, 29(6), 4–14.
    Mayer, D. P., Peterson, P. E., Myers, D. E., Tuttle, C. C., & Howell, W. G. (2002). School choice in New York City after three years: An evaluation of the School Choice Scholarships Program (Report 8404–045). Mathematica Policy Research. Retrieved May 25, 2002, from http://www.mathematica-mpr.com/PDFs/nycfull.pdf
    McEwan, E. K. (2002). Teach them all to read: Catching the kids who fall through the cracks. Thousand Oaks, CA: Corwin.
    McEwan, P. J. (2000). The potential impact of large-scale voucher programs. Review of Educational Research, 70(2), 103–149. http://dx.doi.org/10.3102/00346543070002103
    McEwan, P. J. (2002). Are cost-effectiveness methods used correctly? In H. M. Levin & P. J. McEwan (Eds.), Cost-effectiveness and educational policy: 2002, yearbook of the American Education Finance Association (pp. 37–53). Larchmont, NY: Eye on Education.
    McKechnie, J. L. (Ed.). (1983). Webster's new universal unabridged dictionary (2nd ed.). New York: Simon & Schuster.
    McLaren, P. (1998). Life in schools: An introduction to critical pedagogy in the foundations of education. New York: Longman.
    Miller, D. W. (1999, August 6). The black hole of education research. Chronicle of Higher Education, A17–18.
    Millsap, M. A., Chase, A., Obeidallah, D., Perez-Smith, A., Brigham, N., & Johnson, K. (2000). Evaluation of Detroit's Comer Schools and Families Initiative: Final report. Cambridge, MA: Abt.
    Moe, T. M. (Ed.). (1995). Private vouchers. Stanford, CA: Hoover Institution.
    Molnar, A., Smith, P., Zahorik, J., Palmer, A., Halbach, A., & Ehrle, K. (1999). Evaluating the SAGE Program: A pilot program in targeted pupil-teacher reduction. Educational Evaluation and Policy Analysis, 21(2), 165–177. http://dx.doi.org/10.3102/01623737021002165
    Mosteller, F. (1995). The Tennessee study of class size in the early school grades. The Future of Children, 5(2), 113–127. http://dx.doi.org/10.2307/1602360
    Mosteller, F., & Boruch, R. (Eds.). (2002). Evidence matters: Randomized trials in education research. Washington, DC: Brookings Institution.
    National Center for Accelerated Schools. (2002a). General information. Retrieved April 23, 2002, from http://www.acceleratedschools.net/
    National Center for Accelerated Schools. (2002b). General information. Retrieved June 15, 2002, from http://www.acceleratedschools.net/main_gen.htm
    National Center for Education Statistics. (2001, January 24). NAEP 1998 reading report card: National and state highlights. Retrieved May 29, 2001, from http://nces.ed.gov/nationsreportcard/reading/stureadmore.asp
    National Center for Education Statistics. (2002a). High school and beyond. Retrieved May 28, 2002, from http://nces.ed.gov/surveys/hsb/
    National Center for Education Statistics. (2002b). National Education Longitudinal Study (NELS). Retrieved May 28, 2002, from http://nces.ed.gov/surveys/nels88/
    National Clearinghouse for Comprehensive School Reform. (2002, May 24). Home page. Retrieved June 1, 2002, from http://www.goodschools.gwu.edu
    National Educational Research Policy and Priorities Board. (2000a). A blueprint for progress in American education (A White Paper). Washington, DC: Author.
    National Educational Research Policy and Priorities Board. (2000b). Proceedings of the National Conference on Curriculum, Instruction, and Assessment in the Middle Grades: Linking Research and Practice, July 24–25, 2000. Washington, DC: Author. Retrieved May 28, 2002, from http://www.ed.gov/offices
    National Educational Research Policy and Priorities Board. (2002). Mission statement. Retrieved May 28, 2002, from http://www.ed.gov/offices/
    National Forum to Accelerate Middle Grades Reform. (2002). Schools to watch: Selection criteria. Retrieved May 28, 2002, from http://www.mgforum.org/criteria.asp
    National Middle School Association. (1982). This we believe. Columbus, OH: Author.
    National Reading Panel. (2000). Report of the National Reading Panel: Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Reports of the Subgroups. Rockville, MD: National Institute of Child Health and Human Development.
    National Science Foundation. (2002). NSF creation and mission. Retrieved May 31, 2002, from http://www.nsf.gov/home/about/creation.htm
    Neal, D. (1998). What have we learned about the benefits of private schooling? Federal Reserve Bank of New York Economic Policy Review, 4(1), 79–86.
    No Child Left Behind Act of 2001, Pub. L. No. 107–110, 115 Stat. 1425, H. R. 1 (2002). Retrieved May 28, 2002, from http://www.ed.gov/legislation/ESEA02/
    Noblit, G. E., & Hare, D. W. (1988). Meta-ethnography: Synthesizing qualitative studies. Thousand Oaks, CA: Sage.
    Northwest Regional Educational Laboratory. (2001). The catalog of school reform models. Retrieved April 23, 2001, from http://www.nwrel.org/scpd/catalog/modellist.asp
    Nye, B., Hedges, L. V., & Konstantopoulos, S. (1999). The long-term effects of small classes: A five-year follow-up of the Tennessee class size experiment. Educational Evaluation and Policy Analysis, 21(2), 127–142. http://dx.doi.org/10.3102/01623737021002127
    Olson, L. (1996, Sept. 4). New studies on private choice contradict each other. Education Week. Retrieved May 28, 2002, from http://www.edweek.org/ew/vol-16/01choice.h16
    Olson, L., & Viadero, D. (2002, January 30). Law mandates scientific base for research. Education Week, 1, 14–15.
    Orr, L. L. (1999). Social experiments: Evaluating public programs with experimental methods. Thousand Oaks, CA: Sage.
    Patton, M. Q. (1986). Utilization-focused evaluation. Thousand Oaks, CA: Sage.
    Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.
    Peters, T., & Waterman, R. H. (1982). In search of excellence: Lessons from America's best-run companies. New York: Harper & Row.
    Phillips, D. C. (2000). The expanded social scientist's bestiary: A guide to fabled threats to, and defenses of, naturalistic social science. Lanham, MD: Rowman & Littlefield.
    Pilgreen, J. L. (2000). The SSR handbook: How to organize and manage a sustained silent reading program. Portsmouth, NH: Boynton/Cook.
    Policy Studies Associates, Inc. (2002). A validation report on READ 180: A print and electronic adaptive intervention program, grades 4–8. Washington, DC: Author.
    Pressley, M., Burkell, J., Cariglia-Bull, T., Lysynchuk, L., McGoldrick, J. A., Schneider, B., Snyder, B., Symons, S., & Woloshyn, V. E. (1995). Cognitive strategy instruction that really improves children's academic performance. Cambridge, MA: Brookline.
    Public Agenda. (2002). Reality check 2002. New York: Author.
    Ravitch, D. (1998, December 16). What if research really mattered? Education Week. Retrieved February 25, 2002, from http://www.edweek.com
    Reading Recovery Council of North America. (2002a). Reading Recovery facts and figures (U.S. 1984–1999). Retrieved May 28, 2002, from http://www.readingrecovery.org/sections/reading/facts.asp
    Reading Recovery Council of North America. (2002b). The cost-benefits of Reading Recovery. Retrieved May 28, 2002, from http://www.readingrecovery.org
    Revans, R. (1982). What is action learning? Journal of Management Development, 2(3), 64–75. http://dx.doi.org/10.1108/eb051529
    Rice, J. K. (1997). Cost analysis in education: Paradox and possibility. Educational Evaluation and Policy Analysis, 19(4), 309–317.
    Rouse, C. E. (1998). Private school vouchers and student achievement: An evaluation of the Milwaukee parental choice program. Quarterly Journal of Economics, 113(2), 553–602. http://dx.doi.org/10.1162/003355398555685
    Ruffini, S. (1992). Assessment of Success for All school years 1988–1991. Unpublished report. Baltimore: Baltimore City Public Schools, Department of Research and Evaluation.
    Sanford, N. (1981). A model for action research. In P.Reason & J.Rowan (Eds.), Human inquiry (pp. 173–181). Chichester, UK: Wiley.
    Santa, C. M. (1986). Content reading in secondary school. In J. Orasanu (Ed.), Reading comprehension: From research to practice (pp. 303–317). Hillsdale, NJ: Lawrence Erlbaum.
    Sarason, S. B. (1996). Revisiting “The Culture of the School and the Problem of Change.” New York: Teachers College Press.
    Schmoker, M. (2001). The results fieldbook: Practical strategies from dramatically improved schools. Alexandria, VA: Association for Supervision and Curriculum Development.
    Schoenbach, R., Greenleaf, C., Cziko, C., & Hurwitz, L. (1999). Reading for understanding: A guide to improving reading in middle and high school classrooms. San Francisco: Jossey-Bass.
    Scholastic. (n.d.). READ 180: Proven intervention that turns lives around (Brochure). New York: Author.
    Schwandt, T. A. (2001). Dictionary of qualitative inquiry (2nd ed.). Thousand Oaks, CA: Sage.
    Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
    Shaker, P., & Heilman, E. E. (2002). Advocacy versus authority—Silencing the education professoriate. Policy Perspectives, 3(1), 1–6.
    Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific inquiry in education. Washington, DC: National Academy Press.
    Silverman, D. (2001). Interpreting qualitative data. London: Sage.
    Slavin, R. E., & Madden, N. A. (1995, April). Effects of Success for All on the achievement of English language learners. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.
    Slavin, R. E., & Madden, N. A. (1999). Success for All/Roots & Wings: Summary of research on achievement outcomes (Report No. 41). Baltimore: Johns Hopkins University, Success for All Foundation.
    Slavin, R. E., & Madden, N. A. (2001). One million children: Success for All. Thousand Oaks, CA: Corwin.
    Slavin, R. E., Madden, N. A., Dolan, L. J., & Wasik, B. A. (1996). Every child, every school: Success for All. Thousand Oaks, CA: Corwin.
    Smith, M. L., & Glass, G. V. (1987). Research and evaluation in education and the social sciences. Englewood Cliffs, NJ: Prentice Hall.
    Snow, C. E. (2001). Knowing what we know: Children, teachers, researchers. Educational Researcher, 30(7), 3–9. http://dx.doi.org/10.3102/0013189X030007003
    Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press, Committee on the Prevention of Reading Difficulties in Young Children, Commission on Behavioral and Social Sciences and Education, National Research Council.
    Spindler, G. D. (1982). Doing the ethnography of schooling: Educational anthropology in action. New York: Holt, Rinehart, & Winston.
    Srivastva, S., Fry, R. E., & Cooperrider, D. L. (1990). Introduction: The call for executive appreciation. In S. Srivastva, D. L. Cooperrider, and Associates (Eds.), Appreciative management and leadership: The power of positive thought and action in organizations (pp. 1–33). San Francisco: Jossey-Bass.
    Stecher, B. M., & Bohrnstedt, G. W. (2002). Class size reduction in California: Summary findings from 1999–00 and 2000–01 (Technical Report, CSR Research Consortium). Retrieved May 28, 2002, from http://www.classize.org/techreport/index-01.htm
    Stokes, D. E. (1997). Pasteur's quadrant: Basic science and technological innovation. Washington, DC: Brookings Institution.
    Strategic Education Research Partnership. (2002). Current BBCSSE (Behavioral, Cognitive, Sensory Science, and Education) Projects. Retrieved November 22, 2002, from http://www7.nationalacademies.org
    Success for All Foundation. (2002). Our history. Retrieved May 28, 2002, from http://www.successforall.net/about/history.htm
    Taylor, B. M., Anderson, R. C., Au, K. H., & Raphael, T. E. (2000). Discretion in the translation of research to policy: A case from beginning reading. Educational Researcher, 29(6), 16–26. http://dx.doi.org/10.3102/0013189X029006016
    ten Have, P. (1998). Doing conversation analysis: A practical guide. London: Sage.
    Torbert, W. R. (1981). Why education research has been so uneducational: The case for a new model of social science based on collaborative inquiry. In P. Reason (Ed.), Human inquiry in action: Developments in new paradigm research (pp. 141–151). London: Sage.
    Torbert, W. R. (1987). Managing the corporate dream: Restructuring for long-term success. Homewood, IL: Dow Jones-Irwin.
    Torbert, W. R. (1991). The power of balance. Newbury Park, CA: Sage.
    Torgesen, J. K., Wagner, R. K., Rashotte, C. A., Rose, E., Lindamood, P., & Conway, T. (1999). Preventing reading failure in young children with phonological processing difficulties: Group and individual responses to instruction. Journal of Educational Psychology, 91(4), 579–593. http://dx.doi.org/10.1037/0022-0663.91.4.579
    Urdegar, S. (1998). Evaluation of the Success for All Programs, 1997–1998. Unpublished report. Miami, FL: Miami Public Schools.
    Urdegar, S. (1999, August 10). Success for All is unethical. Letter to the editor. Wall Street Journal, 21A.
    Urquiola, M. (2000). Identifying class size effects in developing countries: Evidence from rural schools in Bolivia. Unpublished manuscript, Cornell University.
    U.S. Department of Education. (2002, February 6). The use of scientifically based research in education (A Working Group Conference). Washington, DC: Author.
    Venezky, R. (1998). An alternative perspective on Success for All. In K. K.Wong (ed.), Advances in educational policy: Perspectives on the social functions of school (Vol. 4, pp. 57–78). Greenwich, CT: JAI.
    Viadero, D. (2001, January 10). Panel to define scientific rigor in schools research. Education Week, 16.
    Viadero, D. (2002, April 3). Campbell Collaboration seeks to firm up ‘soft sciences.’ Education Week, 8.
    Walker, M. H. (1996). What research really says. Principal, 74(4), 41.
    West, E. G. (1967). Tom Paine's voucher scheme for public education. Southern Economic Journal, 33, 378–382. http://dx.doi.org/10.2307/1055119
    “What Works” Clearinghouse. (2002). “What works” clearinghouse. Retrieved May 28, 2002, from http://www.whitehouse.gov/infocus/education/teachers/sect-4.pdf
    Witte, J. F. (1992). Private versus public school achievement: Are there findings that should affect the educational choice debate? Economics of Education Review, 11(4), 371–394. http://dx.doi.org/10.1016/0272-7757%2892%2990043-3
    Witte, J. F. (1998). The Milwaukee voucher experiment. Educational Evaluation and Policy Analysis, 20(4), 229–251.
    Wolcott, H. F. (1973). The man in the principal's office: An ethnography. New York: Holt, Rinehart & Winston.
    Wood, E., Woloshyn, V. E., & Willoughby, T. (Eds.). (1995). Cognitive strategy instruction for middle and high schools. Cambridge, MA: Brookline.
    Wooden, J. (with Jamison, S.). (1997). Wooden: A lifetime of observations and reflections on and off the court. Chicago: Contemporary Books.
    Zahorik, J., Molnar, A., Ehrle, K., & Halbach, A. (2000). Smaller classes, better teaching? Effective teaching in reduced-size classes. In S. W. M.Laine & J. G.Ward (Eds.), Using what we know: A review of the research on implementing class-size reduction initiatives for state and local policymakers (pp. 53–73). Oak Brook, IL: North Central Regional Educational Laboratory.
    Zeichner, K. M., & Gore, J. M. (1995). Using action research as a vehicle for student teacher reflection: A social reconstructionist approach. In S. E.Noffke & R. B.Stevenson (Eds.), Educational action research: Becoming practically critical (pp. 13–30). New York: Teachers College Press.
    Zemelman, S., Daniels, H., & Hyde, A. (1998). Best practices: New standards for teaching and learning in America's schools. Portsmouth, NH: Heinemann.
