The SAGE International Handbook of Educational Evaluation
Publication Year: 2009
Addresses methods and applications in the field, particularly as they relate to policy- and decision-making in an era of globalization.
- Part I. The Educational Evaluation Context
- 1. Globalization and Policy Research in Education
- 2. Globalizing Influences on the Western Evaluation Imaginary
- 3. Fundamental Evaluation Issues in a Global Society
- Part II. The Role of Science in Educational Evaluation
- 4. Evaluation, Method Choices, and Pathways to Consequences: Trying to Make Sense of How Evaluation Can Contribute to Sensemaking
- 5. Randomized Experiments and Quasi-Experimental Designs in Educational Research
- 6. Enhancing Impact Evidence on How Global Education Initiatives Work: Theory, Epistemological Foundations, and Principles for Applying Multiphase, Mixed Method Designs
- 7. The Evaluation of the Georgia Pre-K Program: An Example of a Scientific Evaluation of an Early Education Program
- 8. Globalization—Blessing and Bane in Empirical Evaluation: Lessons in Three Acts
- 9. Science-Based Educational Evaluation and Student Learning in a Global Society: A Critical Appraisal
- Part III. Educational Evaluation, Capacity Building, and Monitoring
- 10. Evaluation, Accountability, and Performance Measurement in National Education Systems: Trends, Methods, and Issues
- 11. A Precarious Balance: Educational Evaluation Capacity Building in a Globalized Society
- 12. International Assessments and Indicators: How Will Assessments and Performance Indicators Improve Educational Policies and Practices in a Globalized Society?
- 13. Exemplary Case: Implementing Large-Scale Assessment of Education in Mexico
- 14. Inquiry-Minded District Leaders: Evaluation as Inquiry, Inquiry as Practice
- 15. Where Global Meets Local: Contexts, Constraints, and Consensus in School Evaluation in Ireland
- 16. Accountability and Capacity Building: Can They Live Together?
- Part IV. Educational Evaluation as Learning and Discovery
- 17. Learning-Oriented Educational Evaluation in Contemporary Society
- 18. Meaningfully Engaging With Difference Through Mixed Methods Educational Evaluation
- 19. Case Study Methods in Educational Evaluation
- 20. Developing a Community of Practice: Learning and Transformation Through Evaluation
- 21. Learning-in-(Inter)action: A Dialogical Turn to Evaluation and Learning
- 22. Educational Evaluation as Mediated Mutual Learning
- Part V. Educational Evaluation in a Political World
- 23. Own Goals: Democracy, Evaluation, and Rights in Millennium Projects
- 24. Reclaiming Knowledge at the Margins: Culturally Responsive Evaluation in the Current Evaluation Moment
- 25. Evaluation for and by Navajos: A Narrative Case of the Irrelevance of Globalization
- 26. Dialogue, Deliberation, and Democracy in Educational Evaluation: Theoretical Arguments and a Case Narrative
- 27. Pursuing the Wrong Indicators? The Development and Impact of Test-Based Accountability
- Part VI. Educational Evaluation: Opportunities and New Dilemmas
- 28. Evaluation and Educational Policymaking
- 29. Technology and Educational Evaluation
- 30. Serving the Public Interest Through Educational Evaluation: Salvaging Democracy by Rejecting Neoliberalism
- 31. Dilemmas for Educational Evaluation in a Globalized Society
Copyright © 2009 by SAGE Publications, Inc.
All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.
SAGE Publications, Inc.
2455 Teller Road
Thousand Oaks, California 91320
SAGE Publications Ltd.
1 Oliver's Yard
55 City Road
London EC1Y 1SP
SAGE Publications India Pvt. Ltd.
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road, New Delhi 110 044
SAGE Publications Asia-Pacific Pte Ltd
33 Pekin Street #02-01
Far East Square
Printed in the United States of America
Library of Congress Cataloging-in-Publication Data
The SAGE international handbook of educational evaluation / Katherine Ryan, J. Bradley Cousins.
Includes bibliographical references and index.
ISBN 978-1-4129-4068-9 (cloth : alk. paper)
1. Educational evaluation—Handbooks, manuals, etc. I. Cousins, J. Bradley. II. Title.
LB2822.75. R9 2009
This book is printed on acid-free paper.
09 10 11 12 13 10 9 8 7 6 5 4 3 2 1
Acquisitions Editor: Vicki Knight
Associate Editor: Sean Connelly
Editorial Assistant: Lauren Habib
Production Editor: Catherine M. Chilton
Copy Editor: Heather Jefferson
Typesetter: C&M Digitals (P) Ltd.
Proofreader: Annette R. Van Deusen
Indexer: Diggs Publication Services
Cover Designer: Glenn Vogel
Marketing Manager: Stephanie Adams
Introduction and Volume Overview
Educational evaluation is at once similar to and different from evaluation in other domains of human service practice (e.g., health, justice, social service). Moreover, educational evaluation is subject to policy and governance structures similar to those found across different human service practice domains. Yet educational evaluation is unique in remarkable ways. For instance, the evaluation and assessment of student progress toward valued goals is an integral part of the core business of education—teaching and learning. A corollary is that there exists in education a longstanding tradition of psychometric testing—predominantly achievement testing—that is unparalleled in other domains of human service practice.
The goal of all educational evaluation is to enable programs and policies to improve student learning. There are longstanding tensions, reflected in ongoing dialogue and discussion, about which educational evaluation families or genres (e.g., science-based approaches involving experimental methods, participatory approaches) are best for accomplishing this goal (Campbell & Stanley, 1963; Cook, 2002; Cronbach et al., 1980; Eisner, 1994; Guba, 1969; Stake, 1984). In contemporary society, broader forces such as globalization1 are seen as major factors in the enactment of international and national policies for educational evaluation, assessment, and testing, as well as curriculum and instruction (Burbules & Torres, 2000; Gardner, 2004).
Globalization, to be sure, is a contested and contentious term, one that carries a variety of meanings for different people. Central constructs here are the emergence of new public management (NPM) as the dominant governance paradigm in educational policy and practice and of neoliberalism as the overarching political theory of contemporary governance. The notion of globalization is intended to capture the political, economic, and social forces that have converged over the past 30 years. There are different views about what globalization refers to, including the impact of global economic processes (e.g., production, consumption), the decline of the nation-state system, the emergence of new global media and information technologies that permit the circulation of ideas, resources, and even individuals across boundaries, and the decline of local traditions and values (Burbules & Torres, 2000; Rizvi, 2004; Stein, 2001).
Globalization is characterized by values such as efficiency, entrepreneurship, market-based reform, rational management, and performance-based accountability (Burbules, 2002). At the same time, neoliberal ideologies2 have been on the rise. Under neoliberalism, the role of the state is to create an appropriate market, one that produces enterprising and entrepreneurial individuals (Biesta, 2004). In contrast, within a liberal democracy, the government is constituted as delegated, institutional power structures administering the interests of society. A neoliberal democracy that focuses on appropriate markets aimed at creating enterprising and entrepreneurial individuals signals different arrangements.

Globalization and Education
Globalization is demanding more of education as markets have shifted from industrial production to services, with information technology receiving more attention (Gardner, 2004; Stein, 2001; Teachers College Annual Report, 2004). Within the knowledge-based economy, intellectual resources and knowledge, rather than natural resources and industrial labor, are the critical assets for continuing economic growth. Educational evaluation is playing a key role in this shift to a knowledge-based society, which demands that students be educated for the new world order so that nations remain competitive in the global economy.
Nations now vie for highly competitive positions within the global marketplace (Anderson, 2005). As a consequence, governments are paying increasing attention to the performance of their educational systems—the outcomes of education. Although student learning continues to be paramount, part of this changing contemporary society involves extending learning beyond traditional boundaries to the notion of life-long learning. The life-long learning framework includes learning throughout the life course from birth onward in a wide variety of learning environments, such as formal, nonformal, and informal pathways.3 Improved student learning is therefore the raison d'être for all educational evaluation, with the understanding that “Who is the student?” is changing.
Further, there is a fundamental tension between improving educational achievement and the amount and kinds of resources available for realizing these improvements. Historically, increased demands on education have been supported with increases in resources. However, a defining feature of globalization is the commitment to the notion that growth can be achieved through increased production and efficiency (Lundgren, 2003; Stein, 2001). At the same time, the economic gap between the rich and the poor has increased within the United States and globally. There are also persistent educational achievement gaps between low-income, racial, ethnic, and linguistic minority students and their peers (Baker & LeTendre, 2005; National Center for Educational Statistics, 2005; Organisation for Economic Co-operation and Development, 2003).
Concerns about quality and the resources directed to education are increasing demands for information about school performance. These demands are being addressed through (a) the implementation of performance measurement systems (e.g., high-stakes educational tests) and (b) the development of science-based research and evidence-based policy intended to distinguish "what works" from what does not in improving education. Together, performance-based accountability under NPM4 (Behn, 2001) and evidence-based policy (EBP) constitute a potentially powerful mechanism for steering educational policy.

Performance Measurement Systems
According to Nevo (2002), external educational evaluations were always intended as a form of control. Today, the steering of educational policy is reflected in increasing governmental oversight and control of curriculum, instruction, pedagogy, and evaluation in the international arena. For instance, Argentina, Chile, and other Latin American countries, along with France, Australia, England, the United States, and others, have implemented large-scale assessments to monitor school quality. Test results are used for such diverse purposes as allocating resources to poorly performing schools or publicizing their results, identifying school system inefficiencies, and assessing the extent to which students have learned the prescribed curriculum.
Large-scale assessment use in the United States illustrates this kind of policy steering. For example, the No Child Left Behind Act of 2001 (2002) institutionalizes the reliance on high-stakes tests (performance indicators) as a key mechanism for improving student achievement, reflecting NPM (Behn, 2001). When schools and districts do not show adequate yearly progress (AYP)5 for more than 2 years, a summative evaluative judgment is triggered: a school is judged as "unsuccessful," with no other information brought to bear. Students then may legally enroll in another school, a policy reflecting a free market approach to improving student learning.
Student achievement is also ranked and compared on international assessments such as the Third International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA), which are used to examine cross-curricular competences (Stronach, 1999; Stronach, Halsall, & Hustler, 2002). These cross-country assessment comparisons are linked to the countries' economic performance in the global economy (Stronach et al., 2002). Quantitative performance indicators have come to represent and communicate the quality and quantity of education (Carnoy, 1999). There is an assumption that gains on this kind of quantitative measurement represent more and better education. This is essentially an efficiency model that uses indicators of productivity (gains in achievement test scores) to represent increased school productivity. The Organisation for Economic Co-operation and Development, the International Education Association, and the National Center for Educational Statistics have contributed to developing this view through their respective policy efforts.

"What Works"
In addition, within No Child Left Behind (2002), scientifically based research is defined as "rigorous, systematic, and objective procedures to obtain valid knowledge that is evaluated using experimental or quasi-experimental design." The U.S. Coalition for Evidence-Based Policy defines evidence-based policy (EBP) as policy grounded in research that has been proven effective through randomized controlled experiments replicated on a large scale (Coalition for Evidence-Based Policy, 2002). Many brokerage agencies have been established worldwide with the aim of building capacity to produce EBP. These agencies (a) establish criteria for conducting and evaluating science-based educational research and (b) house a database of "what works." Examples of national and international brokerage agencies include the Evidence for Policy and Practice Information and Co-ordinating Centre (United Kingdom) (http://eppi.ioe.ac.uk), the Iterative Best Evidence Synthesis Programme (New Zealand) (http://www.minedu.govt.nz), the What Works Clearinghouse (USA) (http://www.whatworks.ed.gov), and the Campbell Collaboration (http://www.campbellcollaboration.org), an international nonprofit organization. In effect, evidence-based policy changes what kind of evidence matters for determining educational program effectiveness.

Educational Evaluation for Learning and Accountability?
At the same time, the notion of evaluation as learning is becoming increasingly attractive, with proponents of both educational evaluation for improvement and educational evaluation for accountability staking claims to it. Single, standalone educational evaluations reflecting responsive evaluation roots, aimed at learning and discovery at the local program level, are under some duress—either supplemented or supplanted by outcome indicators (Dahler-Larsen, 2006; Mayne & Rist, 2006). Meanwhile, despite known conceptual complexities (e.g., divergent evaluation purposes), performance monitoring is identified as one evaluation approach aimed at practice improvement and organizational learning (Rogers & Williams, 2006). Although performance monitoring's historical role in holding individuals and organizations accountable is recognized, it is becoming known as a means to improve programs, services, and practices, effectively blurring foundational evaluation purposes.

Globalizing Evaluation
Educational evaluation, in this sense, is being "globalized." Evaluation theories and practices both influence and are influenced by the movement of ideas across national boundaries. How, and the extent to which, educational evaluation theories and practices are changing in response to globalization and other political and social changes has not been examined. In the literature, the ideology of globalization is often portrayed as an inevitable consequence of market forces that results in a top-down homogenization effecting social, political, and cultural changes (Carnoy, 1999; Lingard, 2000). Nevertheless, the effects of globalization at different scales (e.g., global, regional, national, state, and local levels) are just beginning to be considered. For instance, how globalization effects influence or interact with the local level, and the relationships between the local and the global, are not well understood. (This is also the case for other levels, such as regional, state, etc.) In the educational evaluation context, the extent to which globalization effects influence how educational evaluation theories and practices are instantiated locally or at other levels is not clear. Local and national politics and social relations can mediate or moderate the effects of globalization on the local or other levels (Lingard, 2000). Politics, local histories, and cultures will influence how these educational evaluation approaches actually play out at the local and other levels.

Volume Aims and Perspectives
The aim of this volume is to address the challenges, tensions, and issues within and across educational evaluation genres in response to an increasingly globalized society. Contributors from various theoretical and practice perspectives examined whether and how educational evaluation is being redefined by the changing circumstances of globalization, and considered how to address the resulting challenges, tensions, and issues within and across educational evaluation perspectives.
We examined these globalization effects via a comparative examination of various educational evaluation genres or families. Although there are many possibilities, we conceptualize the genres or families as follows. The role of science in educational evaluation includes approaches emphasizing science, characterized by measurement and design. Educational evaluation, capacity building, and monitoring involves educational evaluation theories and practices that assume improved organizations, management, and programs will improve student learning. The third genre, educational evaluation as learning, reflects educational evaluation theories and practices that emphasize attention to understanding program contexts in general and from stakeholders' views, stakeholder participation in evaluation, and value pluralism, with a preference for qualitative or mixed methods. Educational evaluation in a political world incorporates educational evaluation that is (a) oriented around a set of values or ideology and (b) premised on the view that educational programs and "what works" are best understood in relationship to the political currents that influence them.
This kind of comparative analysis of evaluation families or genres is one approach to understanding evaluation (e.g., Cook, 2002; Greene, 1994; House, 1978; Shadish, Leviton, & Cook, 1991). We make no special claim that representing the field according to these specific families of evaluation theory and practice is authoritative in describing evaluation as a domain of inquiry. Further, we acknowledge that a variety of perspectives could be utilized as a framework for this kind of project. As one critical friend notes, examining specific educational practices, such as evaluation in higher education or the evaluation of school effectiveness, could serve as a framework for this kind of endeavor and would yield an interesting set of chapters. Other important educational evaluation issues, such as classroom assessment and evaluation and teacher evaluation, lie beyond the scope of the Handbook. Rather, our intentional focus was on the evaluation of educational programs, policies, organizations, or systems.
Yet we do assert that this framework adequately captures the range of evaluation perspectives that exist in contemporary society. Despite spillage from one family to the next, the categories are sufficiently meaningful and discernable as to warrant their use as an organizing structure. On this point, our editorial board agreed.
At this juncture, we would like to acknowledge our wise and wonderful international editorial board, composed of acclaimed educational evaluation scholars. The Handbook editorial board members were tireless in providing excellent advice in all matters of the volume. They served in several capacities involving a variety of tasks, including peer reviews for the chapters, critical and generous feedback on the volume framework, identification and recruitment of chapter authors, and guidance about critical topics and controversies. In particular, significant efforts were devoted to recruiting educational evaluation scholars from across the world to prepare chapters for this volume. We achieved only modest success in this area, even with help from the Handbook editorial board and board members from evaluation societies. Although we are delighted to have successfully recruited chapter authors from Australia, Europe, the Middle East, North America, and Latin America, we know there are other notable educational evaluation perspectives that would have enhanced the volume.

Volume Organization
This volume is organized in six parts. In addition to parts structured around the educational evaluation genres, we included an introductory part on the context for educational evaluation (Part I) and a concluding part on opportunities and new dilemmas (Part VI). Part I articulates the volume framework, describing the current educational evaluation context, including definitions of globalization, changing educational policies, foundations, and historic evaluation dilemmas.
The chapters in Parts II through V each take up a particular educational evaluation perspective or genre (e.g., educational evaluation as science) in relationship to the changing circumstances of globalization. Within the genre parts, we recruited authors to write chapters about the theory associated with the genre, methods specific to it, or exemplary cases of work in the area. In each part, two evaluation theory and method chapters representing the particular genre are examined. The theories and their respective methodologies are defined and then examined in relationship to the questions focusing the volume. Both the theory and methods chapters are intended to illustrate a defining feature or tension of the educational evaluation genre (e.g., mixed methods), rather than be exhaustive. The case study chapters illustrate how the respective theories translate to practice in a globalized society, including, for example, what counts as knowledge, stakeholder representation, and how student achievement is represented. In some cases, exemplars were positive manifestations or success stories, whereas others revealed challenges, unintended processes or consequences, and/or practical departures from planned directions. The critical appraisal chapter in each part describes how the genre contributes to improving educational policy and practices by providing a critical analysis of the merits and shortcomings of the approach for shaping educational policy and serving educational public interests in a globalized society. The chapters in Part VI focus on educational evaluation issues in contemporary society, including educational evaluation and educational policy, the effects of technology, and serving the public interest. The final chapter, by the editors, focuses on continuing educational evaluation dilemmas (e.g., the evaluator role).
In this final chapter, we endeavored to look across the many contributions in the handbook with an eye to synthesis and integration about educational evaluation in a globalized society.

Notes
1. We acknowledge that globalization is a contested term. For the purposes of this book, we incorporate multiple views of globalization, including (a) the integration of markets and production through large multinational corporations based on the notion of efficiency (Burbules & Torres, 2000; Stein, 2001); (b) the transmission of products, technology, ideas, and cultures across national boundaries (Burbules & Torres, 2000; Suarez-Orozco & Qin-Hilliard, 2004); and (c) others.
2. How or whether these ideological shifts are intertwined with these economic changes is the subject of substantial discussion and debate (Biesta, 2004; Burbules & Torres, 2000; Carnoy, 1999).
3. World Bank, 2005. Retrieved from http://184.108.40.206/search?q=cache:KT02QVGARtcJ:www1.worldbank.org/education/lifelong_learning/
4. NPM is an array of strategies designed to regulate individuals and organizations through auditable performance standards (Power, 1997). These standards are aimed at increasing performance and at making these kinds of improvements transparent and public. Performance is represented by reductions, efficiency, and effectiveness.
5. AYP is (a) the percentage of reading and math scores that meet or exceed standards, compared with the annual state targets; and (b) the participation rate of students in taking the state tests, which must meet or exceed 95%.

References

Anderson, J. A. (2005). Accountability in education. Paris: UNESCO, International Institute for Educational Planning.
Baker, D. P., & LeTendre, G. K. (2005). National differences, global similarities: World culture and the future of schooling. Stanford, CA: Stanford Social Sciences Press.
Behn, R. D. (2001). Rethinking democratic accountability. Washington, DC: Brookings Institution Press.
Biesta, G. J. J. (2004). Education, accountability, and the ethical demand: Can the democratic potential of accountability be regained? Educational Theory, 54(3), 233–250.
Burbules, N. C. (2002). The global context of educational research. In L. Bresler & A. Ardichvili (Eds.), Research in international education: Experience, theory, and practice (pp. 157–170). New York: Peter Lang.
Burbules, N. C., & Torres, C. A. (Eds.). (2000). Globalization and education: Critical perspectives. New York: Routledge.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand-McNally.
Carnoy, M. (1999). Globalization and educational reform: What planners need to know. Paris: UNESCO, International Institute for Educational Planning.
Coalition for Evidence-Based Policy. (2002, November). Bringing evidence-driven progress to education: A recommended strategy for the U.S. Department of Education. Available at http://www.excelgov.org/usermedia/images/uploads/PDFs/coalitionFinRpt.pdf
Cook, T. D. (2002). Randomized experiments in educational policy research: A critical examination of the reasons the educational evaluation community has offered for not doing them. Educational Evaluation and Policy Analysis, 24(3), 175–199.
Cronbach, L. J., et al. (1980). Toward a reform of program evaluation: Aims, methods, and institutional arrangements. San Francisco, CA: Jossey-Bass.
Dahler-Larsen, P. (2006). Evaluation after disenchantment? Five issues shaping the role of evaluation in society. In I. Shaw, M. M. Mark, & J. Greene (Eds.), The Sage handbook of evaluation (pp. 141–160). Thousand Oaks, CA: Sage.
Eisner, E. W. (1994). The educational imagination (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Gardner, H. (2004). How education changes: Considerations of history, science, and values. In M. Suarez-Orozco & D. M. Qin-Hilliard (Eds.), Globalization: Culture and education in the new millennium (pp. 235–258). Berkeley: University of California Press.
Greene, J. C. (1994). Qualitative program evaluation: Practice and promise. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (1st ed., pp. 530–544). Thousand Oaks, CA: Sage.
Guba, E. G. (1969). The failure of educational evaluation. Educational Technology, 9(5), 29–38.
House, E. R. (1978). Assumptions underlying evaluation models. Educational Researcher, 7(3), 4–12.
Lingard, B. (2000). It is and it isn't: Vernacular globalization, educational policy, and restructuring. In N. Burbules & C. Torres (Eds.), Globalization and education: Critical perspectives (pp. 125–134). New York: Routledge.
Lundgren, U. P. (2003). The political governing (governance) of education and evaluation. In P. Haug & T. A. Schwandt (Eds.), Evaluating educational reforms: Scandinavian perspectives (pp. 99–110). Greenwich, CT: InfoAge.
Mayne, J., & Rist, R. C. (2006). Studies are not enough: The necessary transformation of evaluation. Canadian Journal of Program Evaluation, 21(3), 93–120.
National Center for Educational Statistics. (2005). The condition of education, 2005 (NCES Publication 2005–094). Washington, DC: U.S. Government Printing Office.
Nevo, D. (2002). Dialogue evaluation: Combining internal and external evaluation. In D. Nevo (Ed.), School-based evaluation: An international perspective (pp. 3–16). Kidlington, Oxford: Elsevier Science.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Organisation for Economic Co-operation and Development. (2003). Where immigrant students succeed: A comparative review of performance and engagement. Paris: Author.
Power, M. (1997). The audit society. New York: Oxford University Press.
Rizvi, F. (2004, September). Higher education from a global perspective. Paper presented at the Higher Education Collaborative, University of Illinois, Urbana, IL.
Rogers, P. J., & Williams, B. (2006). Evaluation for practice improvement and organizational learning. In I. Shaw, M. M. Mark, & J. Greene (Eds.), The Sage handbook of evaluation (pp. 76–97). Thousand Oaks, CA: Sage.
Ryan, K. E. (2004). Serving the public interests in educational accountability. American Journal of Evaluation, 25(4), 443–460.
Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Thousand Oaks, CA: Sage.
Stake, R. E. (1984). Program evaluation, particularly responsive evaluation. In G. F. Madaus, M. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models (pp. 287–310). Boston: Kluwer-Nijhoff.
Stein, J. G. (2001). The cult of efficiency. Toronto, ON: House of Anansi Press, Ltd.
Stronach, I. (1999). Shouting theatre in a crowded fire: Educational effectiveness as cultural performance. Evaluation, 5(2), 173–193.
Stronach, I., Halsall, R., & Hustler, D. (2002). Future imperfect: Evaluation in dystopian times. In K. Ryan & T. Schwandt (Eds.), Exploring evaluator role and identity (pp. 167–192). Greenwich, CT: InfoAge.
Suarez-Orozco, M. M., & Qin-Hilliard, D. B. (Eds.). (2004). Globalization: Culture and education in the new millennium. Berkeley: University of California Press.
Teachers College Annual Report. (2004). New rules, old responses. Retrieved October 1, 2004, from http://www.tc.columbia.edu/news/article.htm?id=4741
Acknowledgments

We had the great and good fortune to be advised by many individuals and groups who generously provided wise counsel and thoughtful advice throughout all phases of this Handbook. As we mentioned earlier, first and foremost, our authors and editorial board individually and collectively shared vital intellectual capital that is woven into all the chapters, sections, and the overall project. Students in our evaluation courses provided helpful feedback as they read the project prospectus and chapter drafts, taking up the issues addressed in the Handbook in their own intellectual projects. We thank them for their excitement about the project and their thoughtful reflections.
With unfailing good humor and efficiency, Nora Gannon, one of our students, provided support throughout the project, including the coordination of e-mail invitations, file management, author timetables, and many other matters. We are deeply grateful for her help. We also acknowledge our wonderful publishers and editors, Lisa Cuevas Shaw, Vicki Knight, Sean Connelly, the Sage editorial support and production team Catherine Chilton and Heather Jefferson, and many others. They managed to be patient, supportive, and encouraging while urging us to stay close to the timelines—a daunting task to be sure.
Further, we are grateful to our respective departments, colleges, and universities—the Department of Educational Psychology in the College of Education at the University of Illinois and the Faculty of Education at the University of Ottawa. We are fortunate indeed to be in the kind of academic environment that is supportive of this kind of intellectual endeavor. We extend a special thank you to our partners, Norman Denzin and Danielle Delorme, who listened to what we had to say, encouraged us to stay the course, and were so helpful in a variety of other ways. We greatly appreciate their unstinting support and patience.
Sage Publications would like to thank the following reviewers: Austin Independent School District, University of Colorado, University of Illinois at Urbana-Champaign, Alverno College, University of Kentucky.
About the Editors
Katherine E. Ryan is a faculty member in the Educational Psychology Department at the University of Illinois at Urbana-Champaign (UIUC). After receiving her PhD in 1988, she worked as an evaluator for a decade before joining the UIUC faculty in 1999. Her research interests focus on educational evaluation and the intersection of educational accountability and high-stakes assessment. She has served as Associate Editor for the American Journal of Evaluation and New Directions for Evaluation. Her work has examined both evaluative capacity building and the monitoring issues involved in test-based educational accountability. Her current research includes an evaluation of the intended and unintended consequences of a statewide assessment and accountability system in relation to students, instruction, and educational outcomes.
J. Bradley Cousins is Professor of Educational Administration at the Faculty of Education, University of Ottawa. Cousins' main interests in program evaluation include participatory and collaborative approaches, use, and capacity building. He received his PhD in educational measurement and evaluation from the University of Toronto in 1988. Throughout his career, he has received several awards for his work in evaluation, including the Contribution to Evaluation in Canada award (CES, 1999), the Karl Boudreau award for leadership in evaluation (CES-NCC, 2007), and the Paul F. Lazarsfeld award for theory in evaluation (AEA, 2008). He has been Editor of the Canadian Journal of Program Evaluation since January 2002.
About the Contributors
Tineke A. Abma is Associate Professor and Program Leader of “Autonomy and Participation in Chronic Care” at VU Medical Centre, EMGO Institute, Department of Medical Humanities, Amsterdam. Her scholarly work concentrates on participatory and responsive evaluation approaches, dialogue and moral deliberation, narrative and storytelling, and patient participation in health research. She has conducted many evaluation projects in the fields of healthcare (psychiatry, elderly care, intellectual disabilities, rehabilitative medicine, palliative care), social welfare, and higher education.
Raymond J. Adams, BSc (Hons), DipEd, MEd (Melb), PhD (Chicago), FACE, is Professorial Fellow of the University of Melbourne and an independent consultant specializing in psychometrics, educational statistics, large-scale testing, and international comparative studies. He has led the OECD PISA Programme since its inception. Ray has published widely on the technical aspects of educational measurement, and his item response modeling software packages are among the most widely used in educational and psychological measurement. He has served as chair of the technical advisory committee for the International Association for the Evaluation of Educational Achievement and as Head of Measurement at the Australian Council for Educational Research.
Debra D. Bragg is Professor in the Department of Educational Organization and Leadership in the College of Education at the University of Illinois. She is responsible for coordinating the College of Education's Higher Education and Community College Executive Leadership programs, and she is the principal investigator for research and evaluation studies funded by the U.S. Department of Education, state agencies, and the Lumina Foundation for Education. Her research focuses on P-16 policy issues, with special interest in high school-to-college transition and various policies and practices focused on addressing the educational needs of underserved students.
Madhabi Chatterji, PhD, is Associate Professor of Measurement and Evaluation and Codirector of the Assessment and Evaluation Research Initiative, Teachers College, Columbia University. Her research, currently focusing on diagnostic classroom assessment, evidence standards, and educational equity, has been recognized by the Fulbright Commission (2007–2008), the American Educational Research Association (2004), and the Florida Educational Research Association (1993). Refereed publications have appeared in the American Journal of Evaluation, Journal of Educational Psychology, Review of Educational Research, Educational and Psychological Measurement, and Educational Researcher. Her book, Designing and Using Tools for Educational Assessment (Allyn & Bacon, 2003), offers an integrated model for designing and validating measures accounting for user contexts; the model is being applied to develop a national assessment for graduate medical education programs. She is presently serving on an evidence frameworks committee at the Institute of Medicine of the National Academies.
Christina A. Christie, PhD, is Associate Professor in the School of Behavioral and Organizational Sciences at Claremont Graduate University. Christie cofounded the Southern California Evaluation Association, is former Chair of the Theories of Evaluation Division, and is current Chair of the Research on Evaluation Division of the American Evaluation Association. She received the 2004 American Evaluation Association's Marcia Guttentag Early Career Achievement Award. She is also Section Editor of the American Journal of Evaluation and Editor of two recent books: Exemplars of Evaluation Practice (with Fitzpatrick & Mark) and What Counts as Credible Evidence in Evaluation and Applied Research? (with Donaldson & Mark).
Edith J. Cisneros-Cohernour, PhD, is Professor and Research Coordinator of the College of Education at the Universidad Autónoma de Yucatán, Mexico. A former Fulbright fellow, she received her PhD in Higher Education Administration and Evaluation from the University of Illinois at Urbana-Champaign in 2001. From 1994 to 2001, she was also affiliated with the National Transition Alliance for Youth with Disabilities and the Center for Instructional Research and Curriculum Evaluation of the University of Illinois at Urbana-Champaign. Her areas of research interest are evaluation, professional development, organizational learning, and the ethical aspects of research and evaluation. Among her recent publications are Academic Freedom, Tenure, and Student Evaluations of Faculty: Galloping Polls in the 21st Century (2005), Validity and Evaluations of Teaching in Higher Education Institutions Under Positivistic Paradigm (2005), and An Interpretive Proposal for the Evaluation of Teaching in Higher Education (2008). Further, she has made numerous presentations of her work at professional conferences in México, the United States, and Europe.
Thomas D. Cook is the Sarepta and Joan Harrison Professor of Ethics and Justice at Northwestern University where he is also a professor in the Departments of Sociology, Psychology, Education, and Social Policy as well as being a Faculty Fellow of the Institute for Policy Research. His professional interests are in research methodology, particularly methods that can be applied to the evaluation of social programs in education and health. He is a Fellow of the American Academy of Arts and Sciences and has won prizes from many professional organizations, most recently the Sells Prize of the Society for Multivariate Experimental Psychology.
Peter Dahler-Larsen, PhD, is Professor of Evaluation at the Department of Political Science and Public Management, University of Southern Denmark, where he coordinates the Master's Program in Evaluation. His main research interests include cultural, sociological, and institutional perspectives on evaluation. His publications include contributions to The Sage Handbook of Evaluation and The Oxford Handbook of Public Management. With Jonathan Breul and Richard Boyle, he co-edited Open to the Public: Evaluation in the Public Arena (Transaction, 2008). He has also published extensively in Danish on evaluation and the concept of quality. He was President of the European Evaluation Society from 2006 to 2007.
Lois-ellin Datta, PhD, Comparative and Physiological Psychology, has been a National Institutes of Health Fellow, National Director of Head Start Evaluation, National Institute of Education Director for Teaching, Learning, and Assessment, and General Accountability Office Director for Evaluation in Human Services. A Past-President of the American Evaluation Association (ERS) and Editor-in-Chief of New Directions for Evaluation, she is an editorial board member of the American Journal of Evaluation, New Directions for Evaluation, and the Journal of MultiDisciplinary Evaluation. Recipient of both the Myrdal and Ingle Awards, Datta has written more than 100 articles, chapters, and books on evaluation.
John Elliott is Emeritus Professor of Education in the Centre for Applied Research in Education at the University of East Anglia, UK. He is well known internationally for his role in developing the theory and practice of action research in education and for advancing democratic approaches to programme evaluation. UEA awarded him a DLitt degree for his published work (2003), and he has received Doctorates, honoris causa, from the Hong Kong Institute of Education (2002) and the Autonomous University of Barcelona (2003). His selected works, titled Reflecting Where the Action Is, are published in the Routledge World Library of Educationalists (2007).
Irwin Feller is Senior Visiting Scientist at the American Association for the Advancement of Science. He is also Emeritus Professor of Economics at The Pennsylvania State University, where he served on the faculty for 39 years, including 24 years as Director of the Institute for Policy Research and Evaluation. His current research interests include the economics of science and technology, the evaluation of federal and state technology programs, the university's role in technology-based economic development, and the adoption and impacts of performance measurement systems. He has a BBA in economics from the City University of New York and a PhD in economics from the University of Minnesota.
Thomas E. Grayson, PhD, is Director of Evaluation and Assessment in the Office of the Vice Chancellor for Student Affairs and adjunct professor in the College of Education at the University of Illinois at Urbana-Champaign. Dr. Grayson's expertise is in program evaluation with an emphasis on strategies for conducting performance-based assessment. He helps organizations build their capacity to conceptualize and implement evaluations that enable them to strengthen their programs and services. He has published articles and written book chapters on educational policy and practice regarding youth at risk of school failure and individuals with learning disabilities. His publications also include areas on concept mapping technology and appreciative inquiry. Further, he has made numerous presentations on evaluation policy and practice at professional conferences and training seminars.
Jennifer C. Greene has been an evaluation scholar-practitioner for more than 30 years and is currently Professor of Educational Psychology at the University of Illinois at Urbana-Champaign. Her evaluation scholarship focuses on analyzing the intersections of social science method with policy discourse and program decision making, with the intent of making evaluation useful and socially responsible. Greene has concentrated on advancing qualitative, mixed methods, and democratic approaches to evaluation. Her evaluation practice has spanned multiple domains of practice, including education, community-based family services, and youth development. In 2003, Greene received the American Evaluation Association's Lazarsfeld award for contributions to evaluation theory.
Gary T. Henry holds the Duncan MacRae '09 and Rebecca Kyle MacRae Professorship of Public Policy in the Department of Public Policy and directs the Carolina Institute for Public Policy at the University of North Carolina at Chapel Hill. Also, he holds the appointment as Senior Fellow in the Frank Porter Graham Institute for Child Development at UNC-Chapel Hill. Henry has evaluated a variety of policies and programs, including North Carolina's Disadvantaged Student Supplemental Fund, Georgia's Universal Pre-K, public information campaigns, and the HOPE Scholarship, and published extensively in the fields of evaluation, policy research, and education policy.
Stafford Hood is the Sheila M. Miller Professor of Education and Head of the Department of Curriculum and Instruction at the University of Illinois at Urbana-Champaign, where he also holds an appointment as Professor of Educational Psychology in the College of Education. His research and scholarly activities focus primarily on the role of culture in educational assessment and culturally responsive approaches in program evaluation. He has also served as a program evaluation and testing consultant internationally and in the U.S. to the federal government, state departments of education, school districts, universities, foundations, and regional educational laboratories. He was selected as a Fellow of the American Council on Education in 2001.
Rodney K. Hopson holds the Hillman Distinguished Professorship in the Department of Educational Foundations and Leadership, School of Education, Duquesne University. With postdoctoral and visiting research and teaching experiences from the Johns Hopkins Bloomberg School of Hygiene and Public Health, the University of Namibia Faculty of Education, and Cambridge University Centre of African Studies, his general research interests lie in ethnography, evaluation, and sociolinguistics. His publications raise questions about the differential impact of education and schooling in comparative and international contexts and seek solutions to social and educational conditions in the promotion of alternative paradigms, epistemologies, and methods for the way the oppressed and marginalized succeed in global societies.
Ove Karlsson Vestman is Professor of Education at Mälardalen University, Sweden, where he directs the Mälardalen Evaluation Academy. He has been appointed Visiting Professor in the Department for Applied Social Science at London Metropolitan University and Visiting Professor in Social Work at Örebro University, Sweden. He was one of the founders and the first vice president of the Swedish Evaluation Society. His work concentrates on building evaluation capacity, typically using participatory and mixed-method approaches. He has published on dialogue methods and critical theory, as well as the role of values and politics in evaluation.
Brock M. Klein, EdD, is Director of Pasadena City College's Teaching and Learning Center (TLC), which is committed to helping underprepared, first-generation college students move successfully from basic skills to transfer-level courses. Dr. Klein developed the TLC in 2000 with funds from a U.S. Department of Education Title V grant and currently manages several private and federal grants. In addition to his work with basic skills instructors and students, he serves on the advisory board for California's Basic Skills Resource Network and is Associate Professor of ESL.
Saville Kushner is Professor of Applied Research in Education and Professor of Public Evaluation at the University of the West of England. He is an advocate of both the theory and practice of Democratic and Rights-Based Evaluation. Between 2005 and 2007, he served as Regional Officer for Monitoring and Evaluation in UNICEF (Latin America and the Caribbean) and continues to serve as a consultant. For many years, he worked at the Centre for Applied Research in Education at the University of East Anglia, which was prominent in the early advocacy and development of case-based approaches to program evaluation. His evaluation work covers diverse areas of professional enquiry, including schooling, police training, nurse and medical education, and the performing arts. Saville serves in editorial positions for the American Journal of Evaluation and the AEA monograph series, Advances in Program Evaluation.
Miri Levin-Rozalis, PhD, Sociologist and Psychologist, is a faculty member of the Department of Education at Ben-Gurion University, head of the track for Education Management and Policy, and head of the Graduate and Post Graduate Program in Evaluation. At the Mofet Institute, she is co-head of the qualification program in evaluation for teacher trainers. She is cofounder and former president of IAPE (the Israeli Association for Program Evaluation) and has practiced evaluation for almost 30 years. Her current research interest is the sociology of evaluation in Israel and worldwide.
Ulf P. Lundgren, Professor, took his doctorate in Göteborg, Sweden, in 1972. He became Professor of Psychology and Education in Denmark and, in 1975, at the Stockholm Institute of Education, Sweden, where he later served as Vice Chancellor. In 1990, he became Director General for the Swedish National Agency for Education. He has chaired several committees, including the committee for a national curriculum and the committee for an educational act. Lundgren has served as an expert in various positions within educational ministries in Sweden, Norway, France, and Portugal, as well as within the European Union, OECD, and UNESCO. He served in the steering group that formed the PISA evaluations at the OECD. Today, Lundgren holds a personal chair at Uppsala University, where he leads a research group studying educational policy and evaluation.
Linda Mabry is Professor of Educational Psychology at Washington State University Vancouver, specializing in research methodology, program evaluation, and the assessment of K-12 student achievement. She has served on the Board of Directors of the American Evaluation Association and on the Board of Trustees of the National Center for the Improvement of Educational Assessment. She practices case study methodology in research and program evaluation and publishes empirical examples as well as methodological texts such as the one in this volume.
Melvin M. Mark is Professor and Head of Psychology at Penn State University. A past president of the American Evaluation Association, he has also served as Editor of the American Journal of Evaluation, where he is now Editor Emeritus. Among his books are Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs (with Gary Henry and George Julnes), the SAGE Handbook of Evaluation (with Ian Shaw and Jennifer Greene), What Counts As Credible Evidence in Applied Research and Evaluation Practice (with Stewart Donaldson and Tina Christie), Evaluation in Action: Interviews With Expert Evaluators (with Jody Fitzpatrick and Tina Christie), and the forthcoming Social Psychology and Evaluation (with Stewart Donaldson and Bernadette Campbell).
Jacob Marszalek is Assistant Professor of Counseling and Educational Psychology in the School of Education at the University of Missouri-Kansas City. His research interests include applying new quantitative techniques to address research questions in education, program evaluation, testing, and psychology. As a student at the University of Illinois, he assisted with several program evaluations funded at the local, state, and national levels. He consults regularly on the design and implementation of grant evaluations in the Midwest.
Sandra Mathison is Professor of Education at the University of British Columbia. Her research focuses on educational evaluation and especially on the potential and limits of evaluation to support democratic ideals and promote justice. She has conducted national large- and small-scale evaluations of K-12, postsecondary, and informal educational programs and curricula; published articles in the leading evaluation journals; and edited and authored a number of books. She is editor of the Encyclopedia of Evaluation and co-editor (with E. Wayne Ross) of The Nature and Limits of Standards Based Reform and Assessment and Battleground Schools; she is co-author (with Melissa Freeman) of Researching Children's Experiences; and she is Editor-in-Chief of the journal New Directions for Evaluation.
Gerry McNamara, PhD, is Professor of Education at the School of Education Studies, Dublin City University. His research interests include educational evaluation and practitioner research. He is an active member of both the European Evaluation Society and the Irish Evaluation Network, and he is a member of the Council of the British Educational Leadership, Management and Administration Society. His most recent publications, with Joe O'Hara, include Trusting Schools and Teachers: Developing Educational Professionalism Through Self-Evaluation (Peter Lang, 2008) and "The Importance of the Concept of Self-Evaluation in the Changing Landscape of Educational Policy," Studies in Educational Evaluation, 34, 173–179.
Matthew Militello is Assistant Professor in Educational Leadership and Policy Studies at North Carolina State University and has taught at University of Massachusetts, Amherst. Prior to his academic career, Militello was a middle and high school teacher and administrator. His current research centers on preparing school leaders. Most recently, Militello coauthored Leading With Inquiry and Action: How Principals Improve Teaching and Learning, a book that provides explicit examples of how principals enact collaborative inquiry-action cycles to increase student achievement. He has published in Education and Urban Society, Harvard Educational Review, Journal of School Leadership, and Qualitative Inquiry.
David Nevo is Professor Emeritus at the School of Education, Tel Aviv University, Israel. His professional interests include evaluation theory, program evaluation, school-based evaluation, and student assessment. His current research focuses on dialogue evaluation, combining internal and external evaluation and working with schools and teachers to improve their evaluation capabilities and their ability to cope with external evaluation requirements and accountability. Dr. Nevo is the author of Evaluation in Decision Making (with Glasman, Kluwer, 1988) and School-Based Evaluation: A Dialogue for School Improvement (Pergamon, 1995), the editor of School-Based Evaluation: An International Perspective (Elsevier, 2002), and a past editor-in-chief of Studies in Educational Evaluation. He served as Head of the School of Education, Tel Aviv University, and Chief Scientist of the Israeli Ministry of Education.
Theo J. H. Niessen, PhD, is a senior researcher and lecturer at Fontys University of Applied Sciences, Faculty of Nursing. He also heads an ethics committee at a home for elderly people. In his PhD research, he developed an enactivist epistemological framework for understanding teachers' learning processes during responsive evaluation. His current research concentrates on ethics, moral deliberation, and practice improvement.
Joe O'Hara, PhD, is Senior Lecturer at the School of Education Studies, Dublin City University, with responsibility for Initial Teacher Education. His research interests include educational evaluation and initial teacher education. He is an active member of the Irish Evaluation Network and the European Evaluation Society. He is currently a member of the General Council of the European Educational Research Association and is Vice President of the Educational Studies Association of Ireland. His most recent publications, with Gerry McNamara, include Trusting Schools and Teachers: Developing Educational Professionalism Through Self-Evaluation (Peter Lang, 2008) and "The Importance of the Concept of Self-Evaluation in the Changing Landscape of Educational Policy," Studies in Educational Evaluation, 34, 173–179.
Sharon F. Rallis is Dwight W. Allen Distinguished Professor of Education Policy and Reform at the University of Massachusetts, Amherst. A past president of the American Evaluation Association, Rallis has worked in evaluation for over three decades and has published extensively in evaluation journals. Her research focuses on local implementation of policy-driven programs. She has taught on education leadership and policy faculties at the University of Connecticut, Harvard, and Vanderbilt. Rallis' doctorate is from Harvard University. Her books include Learning in the Field: An Introduction to Qualitative Research (with Gretchen Rossman) and Leading with Inquiry and Action: How Principals Improve Teaching and Learning (with Matthew Militello).
Dana K. Rickman was recently named the Director for Research and Policy at the Annie E. Casey Foundation, Atlanta Civic Site. Before that, she was a Senior Research Associate at Georgia State University, Andrew Young School for Policy Studies. Rickman has participated in a variety of public policy evaluations, including North Carolina's Disadvantaged Student Supplemental Fund, Georgia's Universal Pre-K, and Georgia's TANF system. Rickman has published in the fields of evaluation and education policy.
Fazal Rizvi has been a Professor in the Department of Educational Policy Studies at the University of Illinois since 2001, having previously held academic and administrative appointments at a number of universities in Australia. His new book, Globalizing Education Policy (Routledge), will appear in 2009. He has written widely on theories of globalization, educational and cultural policy, and the internationalization of higher education. He is currently researching higher education in India, especially the ways in which Indian universities are engaging with issues of globalization and the knowledge economy. At Illinois, he directs an online program for teachers around the world in Global Studies in Education. See http://gse.ed.uiuc.edu.
Barbara Rosenstein, PhD, grew up in New York and studied at Brooklyn College of the City University of New York, the University of Chicago, and Ben Gurion University of the Negev. After 2 years in the Peace Corps in Tunisia and several years of teaching ESL and French in Connecticut, she moved to Israel. In 1984, she was introduced to the field of evaluation through work with the Bernard van Leer Foundation and has studied, taught, and practiced evaluation ever since. She developed a method of using video for evaluation and has given talks and workshops on the subject. Her main focus has been on community-based programs concerned with education, empowerment, and co-existence. She is a founding member and now chairperson of the Israeli Association for Program Evaluation and was on the first board of the International Organization for Cooperation in Evaluation. Barbara teaches Theory of Evaluation and Ethics in Evaluation at Ben Gurion University of the Negev.
Thomas A. Schwandt is Professor and Chair, Department of Educational Psychology, at the University of Illinois at Urbana-Champaign, where he also holds an appointment in the Department of Educational Policy Studies. He is the author of Evaluation Practice Reconsidered; Evaluating Holistic Rehabilitation Practice; Dictionary of Qualitative Inquiry; and, with Edward Halpern, Linking Auditing and Meta-Evaluation. In 2002, he received the Paul F. Lazarsfeld Award from the American Evaluation Association for his contributions to evaluation theory. He is currently a member of the Standing Committee on Research and Evidentiary Standards, National Research Council, Division of Behavioral and Social Sciences and Education.
Michael Scriven is Professor of Psychology at Claremont Graduate University and Senior Research Associate at the Evaluation Center, Western Michigan University. He took two degrees in mathematics from Melbourne University in Australia and then a doctorate in philosophy at Oxford. His 400+ publications are in four fields: computer studies, philosophy of technology, historiography, and educational research. He was on the faculty at the University of California/Berkeley for 12 years, as well as at Swarthmore and the Universities of Minnesota, Indiana, Western Australia, and Auckland. He has held fellowships at Harvard and the Center for Advanced Study in the Behavioral Sciences at Stanford, among others; has served as president of the American Educational Research Association and the American Evaluation Association; and is on the editorial or review boards of 42 journals.
Christina Segerholm is Senior Lecturer at MidSweden University, Sweden. Her research interest is mainly directed toward critical studies of evaluation impact in education. A more recent interest is evaluation as global policy and governance. Her studies include National Evaluations as Governing Instruments: How Do They Govern?, Evaluation, 7(4); Governance Through Institutionalized Evaluation: Recentralization and Influences at Local Levels in Higher Education in Sweden (with Eva Åström), Evaluation, 13(1); and New Public Management and Evaluation under Decentralizing Regimes in Education, in Dilemmas of Engagement: Evaluation and the New Public Management (Saville Kushner & Nigel Norris, Eds.).
Nick L. Smith (PhD, University of Illinois, 1975) is Professor and Chairperson in the Instructional Design, Development and Evaluation Department, School of Education, at Syracuse University. He has served on numerous editorial boards, including as Editor of New Directions for Evaluation. Professor Smith has received distinguished awards from the Association of Teacher Educators, the American Psychological Association, and the American Evaluation Association. He is a Fellow in the American Educational Research Association and the American Psychological Association, and he served as the 2004 President of the American Evaluation Association. His primary research interests concern the theory and methods of evaluation and applied research.
Peter M. Steiner is Assistant Professor in the Department of Sociology at the Institute for Advanced Studies in Vienna and Visiting Assistant Professor at Northwestern University. He holds a master's degree and a doctorate in statistics from the University of Vienna, as well as a master's degree in economics from the Vienna University of Economics and Business Administration. His research interests are in the methodology of causal inference, particularly quasi-experimental designs in education and experimental vignette designs in survey research.
Claudia V. Tamassia, MEd (Columbia, MO), PhD (Champaign, IL), works at the Educational Testing Service coordinating the OECD Programme for the International Assessment of Adult Competencies (PIAAC). Her primary interests are international education and comparative and international assessment. She has worked at the Ministry of Education in Brazil, at the OECD in Paris in managing the Programme for International Student Assessment, and at the Chicago Public Schools. As a consultant, she has worked with UNESCO and taught at the University of Illinois. She completed her undergraduate studies in Brazil and her graduate work at the University of Missouri-Columbia and at the University of Illinois at Urbana-Champaign.
Harry Torrance is Professor of Education and Director of the Education and Social Research Institute, Manchester Metropolitan University, UK. He has conducted many empirical studies of student assessment and has published widely in the fields of assessment, program evaluation, and education reform. He is an elected member of the UK Academy of Social Sciences.
Cees P. M. van der Vleuten is Professor of Education at Maastricht University. He is appointed Professor of Education at the Faculty of Health, Medicine, and Life Sciences; Chair of the Department of Educational Development and Research; and Scientific Director of the School of Health Professions Education (http://www.she.unimaas.nl). His area of expertise is evaluation and assessment. He has published widely on these topics and has received several academic awards for this work. He frequently serves as an educational consultant internationally.
Guy A. M. Widdershoven is Professor in Philosophy and Ethics of Medicine at the VU University Medical Centre, EMGO Institute, Department of Medical Humanities, Amsterdam. His work concentrates on the development of contextual approaches to ethics (hermeneutic ethics, dialogical ethics, narrative ethics, ethics of care) in chronic care (psychiatry, care for the elderly, care for persons with an intellectual disability).
Angela Wroblewski is Assistant Professor in the Department of Sociology at the Institute for Advanced Studies in Vienna and Lecturer at the Vienna University of Economics and Business Administration and the University of Vienna. She teaches research methods to students at the BA, MA, and PhD levels. Her research interests are in evaluation research (especially of education and labor market programs with a gender focus) and equal opportunities in education and labor markets.