SAGE Handbook of Research on Classroom Assessment

Edited by: James H. McMillan

Abstract

The SAGE Handbook of Research on Classroom Assessment provides scholars, professors, graduate students, and other researchers and policy makers in universities, organizations, agencies, testing companies, and school districts with a comprehensive source of research on all aspects of K-12 classroom assessment. The handbook emphasizes theory, conceptual frameworks, and all varieties of research (quantitative, qualitative, mixed methods) to provide an in-depth understanding of the knowledge base in each area of classroom assessment and how to conduct inquiry in the area. It presents classroom assessment research to convey, in depth, the state of knowledge and understanding that is represented by the research, with particular emphasis on how classroom assessment practices affect student achievement and teacher behavior. Editor James H. McMillan and five associate editors bring the best thinking and analysis about the nature of research to each area.

    Foreword

    Lorrie A. Shepard, University of Colorado Boulder

    Why should we care about a topic as mundane as classroom assessment? Assessments used in classrooms are not so elegant as psychometric models, and they are highly local and private to individual teachers’ classrooms. Yet, everyday and every-week assessments determine the very character and quality of education; they set the actual, on-the-ground goals for learning and delimit the learning opportunities provided.

    In the past decade, a great deal of attention has been paid to the research evidence documenting the potential power of formative assessment to greatly enhance learning – a potential that has yet to be realized. Of even greater potency – often for ill rather than good – is the power of classroom summative assessment to convey what is important to learn. Tests, quizzes, homework assignments, and questions at the end of the chapter implicitly teach students what learning is about. If students come to hold a highly proceduralized view of mathematics or think of science knowledge as vocabulary lists, classroom summative assessments are largely to blame. Classroom assessments can also distort the how and why of learning, if they signal for students that the purpose for learning is to perform well on tests.

    To reflect learning that matters, classroom summative measures, whether projects, portfolios – or tests – must be deeply grounded in subject-matter content and processes. And, to support deep learning, formative assessments must elicit student thinking and provide substantive insights rather than quantitative score reports. Research on classroom assessment, therefore, must be the province of subject-matter experts and learning scientists as much as or even more than that of measurement experts.

    This handbook is intended for an expanded community of researchers, graduate students in search of dissertation topics, and curriculum reformers. It is a compendium of research, gathering together what we know now and highlighting what we need to learn more about in order to improve practice. The chapters in this volume go deep into studies of specific topics, including feedback, grading, self-assessment, performance assessments, and validity. Unlike recent volumes that consider only the importance of formative assessment, this handbook takes up both formative and summative classroom assessment, which is a critically important step toward conceiving the two in a way that does not put them at cross purposes. But, for the work that follows to be effective, there needs to be a coherent story line that links all of these pieces and the two purposes together. Contemporary learning theory provides the needed conceptual glue and offers a coherent, explanatory framework by which effective practices can be understood and analyzed. Sociocultural learning theory, in particular, subsumes findings from motivational psychology and helps to advance instructional practices that foster both student engagement and higher levels of thinking and reasoning.

    A decade ago, when I wrote about “the role of assessment in a learning culture,” I chose the concept of culture because of its pervasiveness and integrated nature. I was trying to get at the underlying fabric of linked meanings and classroom interaction patterns that had created a “testing culture,” and to think instead about the profound shifts that would need to occur to establish a learning culture. Culture is about deeply rooted, but dynamic, shared beliefs and patterns of behavior. It is not a list of cognitive and affective variables elicited one at a time, but a complex set of woven-together assumptions and meanings about what is important to do and be. These shared assumptions may be tacit and invisible, like the air we breathe, yet they are potent. It is this integrated nature of learning environments that must be understood if we are to design for productive learning. Just as the authors of next-generation science standards are realizing, for example, that big ideas (content) and scientific practices have to be engaged together, classroom assessment researchers and developers must realize how their pieces contribute to the larger picture. There cannot, for example, be one theory of learning and motivation for formative assessment and a different one for summative assessment.

    From earlier cognitive research, we know that learning is an act of sense-making whereby learners construct new knowledge and understandings by drawing connections with what they already know. Given the centrality of prior knowledge, formative assessment practices should focus on teacher noticing and instructional routines intended to make student thinking visible. Sociocultural theory then goes further to explain how social interactions with others and with cultural tools enable co-construction of what is taken into the mind. According to Vygotsky's concept of the zone of proximal development, learning occurs as a student tries out thinking and reasoning with the assistance of more knowledgeable others. This learning-theory idea of supported participation, or scaffolding, is congruent with Royce Sadler's seminal conception of formative assessment. For assessment to enable new learning, the student must: 1) come to have a shared understanding of quality work similar to that of the teacher, 2) be able to compare the current level of performance against that standard using quality criteria, and 3) be able to take action to close the gap. Assessment is the middle step, but it need not be called out as separate from the learning process. For formative assessment practices to be consistent with this view of learning, there should be clear attention to learning goals in terms that are accessible to students, evaluation by means of shared criteria as to where students are in relation to goals, and tailored feedback that offers specific guidance about how to improve.

    The foregoing account, emphasizing the cognitive aspects of learning, however, tells only half of the story. From a sociocultural perspective, socially supported instructional activity involves more than gap closing; it fully engages the cognitive, metacognitive, and motivational aspects of learning. It is not surprising, for example, that self-regulation is a theme that repeats across a dozen of the chapters in this volume. When students are engaged in meaningful activity, they see models of proficient participation and seek to emulate them. They are “motivated” to improve by a growing identity of mastery and at the same time develop the metacognitive skills needed to monitor their own performance. Formative assessment practices enhance learning when students are positioned as thoughtful contributors to classroom discourse and have a sense of ownership in the criteria used by a community of practice. Author's Chair is an example of an instructional practice that highlights the joint construction of writing skill and identity as an author and also makes “critique” and improvement based on feedback an authentic part of becoming more adept. For these motivational claims to hold true, however, instructional content and activities have to be compelling and worthwhile.

    This view of motivation, where students invest effort to get good at something, is often seen in out-of-school contexts – learning so as to become a dancer, a musician, a gamer. When in-school experiences lack this sense of purpose, it is most often because of dreary content and grading practices that reward compliance rather than learning. The measurement literature is replete with studies documenting the mixture of considerations that go into teachers’ grades. The majority of teachers use points and grades to “motivate” their students. Contrary to their intentions, this leads to the commodification of learning and fosters a performance orientation rather than a mastery or learning orientation. While economists certainly have shown that incentives do work, what we have learned from countless motivational studies is that the use of external rewards diminishes intrinsic motivation. Most poignantly, when young children are rewarded with pizza or stickers for activities like drawing or reading, they like the activities less after the rewards stop than they did before the reinforcement schedule began. Many teachers also feel strongly, for motivational reasons, about including effort as a factor in determining grades. But again this distorts the causal chain. Instead of effort enabling learning that leads to a grade, effort – and sometimes the mere performance of effort – produces the grade. In addition to the motivational harm, the achievement goals – toward which student and teacher were working jointly in the formative assessment model – are obscured.

    In her chapter on grading, Susan Brookhart reviews the rationale for achievement-based or standards-based grading and the few available studies. Formative assessment and summative assessment are compatible if they focus on the same rich, challenging, and authentic learning goals and if feedback in the midst of instruction leads to internalized understandings and improved performance on culminating, summative tasks. This is not the same thing as adding up points on multiple interim assignments. Researchers and reformers who want to develop grading practices more in keeping with research on learning and motivation must keep in mind several obstacles. First, they are working against long-standing beliefs on the part of teachers and students, and the older students are, the more explicit negotiations must be to refocus effort on learning. Second, teachers today are under enormous pressure to keep parents informed and to maintain real-time data systems. Often this means that points are given for completing work, not for what was learned; and even when quality is assessed, the grade is recorded as if the learning were finished. Third, in an environment of interim and benchmark testing, the nature of feedback is often distorted, emphasizing to students that they need to get three more items correct to pass a standard rather than what substantive aspect of a topic still needs to be understood. The specific ways that these dilemmas can be addressed will vary tremendously according to the age of the students, from the earliest grade levels, where letter grades need not be assigned, to secondary school contexts, where formative processes can serve a coaching purpose to help students attain the verification of skills they need for external audiences.

    Of course, all of these claims about processes and meanings of assessment won't amount to much if the contents of assessments are not transformed to direct learning toward more ambitious thinking and reasoning goals. For learning activities to pull students in, they must offer a sense of purpose and meaning. Typically this requires authenticity and a connection to the world (although fantasy worlds with sufficient complexity and coherence can also be compelling). Trying to develop rich curriculum and capture higher-order thinking is an old and familiar problem, going back 100 years if you look hard enough. More immediately, the literature of the past 20 years reminds us that declaring world-class standards and promising to create more authentic assessments – “tests worth teaching to” – are intentions that have been tried before and frequently undone.

    A new round of content standards holds the promise of providing much more coherently developed learning sequences with attention to depth of understanding. And, as previously mentioned, recently developed standards have also attended to mathematical and scientific practices – arguing from evidence, developing and using models, and so forth – as well as big-idea content strands. The hope, too, is that next-generation assessments will more faithfully represent the new standards than has been the case for large-scale assessments in the past. While the knowledge exists to make it possible to construct much more inventive assessments, this could more easily be done in the context of small-scale curriculum projects than for large-scale, high-stakes accountability tests. In the latter case, the constraints imposed by cost, demands for speedy score reporting, and the need for curriculum neutrality across jurisdictions can quickly drive out innovation and substantive depth. Only time will tell if these new promises for focused and coherent content standards and next-generation assessments will be achieved.

    Research on classroom assessment has the potential to make a tremendous contribution to improving teaching and learning, if it is focused on making these grand theoretical claims come true – both regarding learning theory and motivation and regarding fuller representations of content and disciplinary practices. We probably do not need more studies documenting limitations of current practice. Instead, a theme repeated across the many studies reported in this handbook is the need to plan for and support teacher learning. Teachers need access to better tools – not disconnected item banks but rather curricular tasks that have been carefully designed to elicit student thinking and for which colleagues and curriculum experts have identified and tested out follow-up strategies. Learning progressions are one way to frame this kind of recursive research and development work. Elsewhere, I've also argued for the design of replacement units – carefully designed alternative curriculum units that, because they are modularized, are easier to adopt and try out. Such units would need to include rich instructional activities aimed at a combination of topics and standards (for example, developing and using models and heredity), formative questions and tasks, samples of student work illustrating the range of novice understandings and how to build on them, end-of-unit summative tasks, and possibly longer-term follow-up questions and extensions connected to other units. The point is to recognize how much needs to be worked out and to acknowledge the impossibility of every teacher inventing this well for every topic all on her or his own.

    What we know about teacher learning, in parallel to student learning, is that teachers need the opportunity to construct their own understandings in the context of their practice and in ways consistent with their identity as a thoughtful professional (rather than a beleaguered bureaucrat). Teachers need social support and a sense of purpose, hence the appeal of communities of practice (although mandated communities of practice may undo the intended meaning). The research literature also warns us that some obstacles are large enough and predictable enough that they should be attended to explicitly in the curriculum, assessment, and professional development design process. Specifically, the articulation between formative assessment (held safe from grading) and summative assessment could be a part of curriculum design, especially for model units used as part of professional development. In my experience, teachers can generalize new curricular and assessment ideas once they get the hang of it, but doing it well when everything is new is much more challenging. Another theme to be addressed explicitly is the relationship of rich, new curricular materials and high-stakes assessments. Groups of teachers jointly analyzing what's on the test, what's not, and how to stay true to more complete learning goals creates both greater awareness and a shared commitment to avoid narrow teaching to the test.

    Present-day learning theory has grown to encompass cognition, metacognition, and motivation, and has altered what it means to know and participate meaningfully in disciplinary knowledge communities. These perspectives should inform the design of curriculum and of classroom assessments intended to improve student learning and instructional practices. In addition, an understanding of these theories might also be used to examine the heterogeneity in study outcomes, explaining, for example, why two studies with the same-named intervention might produce different learning results.

    Preface

    This book is based on a simple assertion: Classroom assessment (CA) is the type of measurement in education that most powerfully influences student learning. This premise lays the foundation for the need for research on CA. Noting the importance of CA is not new, but research in the area is sorely lacking. The measurement community has been, and certainly remains, focused on large-scale assessment, mostly for high-stakes accountability testing. Indeed, the changing nature of accountability testing, especially its use for teacher evaluation, will only heighten the already high stakes. We know that these large-scale tests influence what happens in the classroom, from what standards are emphasized and how pacing guides are built to cover standards, to the levels of knowledge and understanding that are stressed. But it is through CAs that student motivation and learning are most directly affected. Whether summative or formative, classroom tests, quizzes, questioning, papers, projects, and other measures are what students experience on an ongoing basis, and these measures directly impact what and how students study and what they learn. How teachers conceptualize the assessments they use and how those assessments are integrated (or not) with instruction have a direct influence on student engagement and learning.

    What we have emphasized in this book is research on CA that provides a better understanding of the theoretical, conceptual, and empirical evidence, informing the academic community in a way that furthers principles of CA that lead to best practice. This includes consideration of advances in learning and motivation theory and research that underpin classroom dynamics related to assessment, as well as more direct investigations of specific approaches to assessment.

    Our Purpose

    The aim of the SAGE Handbook of Research on Classroom Assessment is to present a comprehensive source of research on most aspects of K-12 CA and to begin to build an empirical foundation for research that will advance our understanding of CA. This single text presents and analyzes all types of research on CA, with an emphasis on the important conceptual and theoretical frameworks that are needed for establishing a solid and enduring research foundation. Overall, CA research is summarized and analyzed to convey, in depth, the state of knowledge and understanding that is represented by the research. There is a particular emphasis on how CA practices affect student achievement and teacher behavior. Leading CA researchers served as associate editors and authors, bringing the best thinking and analysis about the nature of research to each area.

    The handbook is written primarily for scholars, professors, graduate students, and other researchers and policy makers in universities, organizations, agencies, testing companies, and school districts. Practitioners will find value in specific chapters. The research does have implications for practice, but the main focus of the handbook is on summarizing and critiquing research, theories, and ideas to present a knowledge base about CA and to lay the groundwork for future research. As such, the handbook will serve as an excellent text for graduate students taking assessment classes. Educational psychology, curriculum, and methods professors can use the handbook as a primary or supplementary text in their classes and as a source of knowledge to inform their research.

    Classroom Assessment as a Field of Study?

    CA has been developing an identity as a research-based field in earnest over the past two decades, spurred perhaps most by the work on formative assessment. Interestingly, while the first three editions of the Handbook of Research on Teaching (1963–1986; Gage, 1963; Travers, 1973; Wittrock, 1986) and Educational Measurement (1951–1989; Lindquist, 1951; Thorndike, 1971; Linn, 1989) did not contain a chapter on CA, the fourth editions did (Brennan, 2006; Richardson, 2001) – both important chapters by Lorrie Shepard. Shepard's 2006 chapter in the fourth edition of the Educational Measurement handbook was one of the first major indications that the larger measurement and research community had an interest in how teachers assess students on what is learned in the classroom, apart from the use of large-scale achievement tests (the National Council on Measurement in Education [NCME] has, though, for years promoted some research, training, and activity in CA).

    This new emphasis on CA occurred on the heels of work on the nature of cognition that developed new ways of understanding learning and motivation. Contemporary learning and motivation theories now focus on constructivist paradigms that emphasize the importance of prior learning, social and cultural contexts, and deep understanding. A critically important text in assessment by Pellegrino, Chudowsky, and Glaser, Knowing What Students Know, published in 2001, formally integrated newer cognitive and constructivist theories of learning with the need for new and different CAs. The convergence of these factors has resulted in increased attention to CA as a field that has matured considerably since the days when tests-and-measurement courses were used to teach teachers about assessment.

    Despite this increased attention, however, it is not entirely clear that CA is a distinct field of study. Perhaps it is an emerging field of study. The hallmarks of a field of study include a recognized, specialized language and terminology; journals; degrees; institutes; conferences; and forums for researchers to exchange and discuss research and ideas, develop new lines of research, and develop new knowledge. The CA community has some of these elements, as evidenced perhaps most clearly by the continuing work on CA in England and the Classroom Assessment Special Interest Group of the American Educational Research Association (AERA). But there are no degrees that I'm aware of, no professional journal, and few conferences. So there is much to be done. The field of CA (if you can call it a field) does not have a strong and recognized research base that builds upon itself and is subject to the level of scrutiny that transpires in journals, conferences, and degree programs.

    Organization

    The handbook is organized into six sections. The overall logic of the sequence of chapters is to first present underlying theory and contextual influences, then technical considerations. That is followed by chapters on formative and summative CA. In the last two sections, separate chapters are devoted to different methods of assessment and to assessments in different subject areas. This organization was used to show how each of these areas is important in contributing to research that will advance our knowledge and understanding of effective CA.

    In the first section, chapters focus on conceptual and theoretical frames of reference and contextual factors that influence CA research. The emphasis is on examining previous CA research, learning and motivation research and theory, and the pervasive impact of large-scale, high-stakes testing. These factors help us understand how to frame important CA research questions so that they yield principles that are well aligned with how students are motivated and learn.

    The second section, with leadership from Associate Editor Sarah M. Bonner, includes four chapters that examine technical measurement issues and principles that must be considered in conducting CA research. These include the three pillars of high-quality CA—(1) validity, (2) reliability, and (3) fairness. Each of these is considered from both traditional measurement and classroom measurement perspectives. There is a long history that documents the difficulty of applying principles of validity, reliability, and fairness to CA, and these chapters move our profession toward a more relevant set of principles that apply to what teachers do in the classroom. This section also includes a chapter that examines techniques for measuring CA itself.

    The chapters in the third section, with leadership from Associate Editor Dylan Wiliam, focus on what has become the most recognized topic within measurement: formative assessment. In these chapters, the emphasis is on what occurs in the classroom, the more informal, ongoing process of gathering evidence, providing feedback, and making instructional correctives. In the fourth section, with leadership from Associate Editor Susan M. Brookhart, classroom summative assessment is addressed. This type of assessment should not be ignored or completely consumed by formative dynamics. What teachers do to document student learning is critical, as is the nature and use of reporting grades.

    In the fifth section, with leadership from Associate Editor Heidi L. Andrade, research in seven different methods of CAs is presented and analyzed. This includes separate chapters on constructed-response (CR) and selected-response (SR) types of items and tests, performance assessment, and portfolio assessment, as well as chapters that focus on more recently emphasized techniques to examine student self-assessment, peer-assessment, and social–emotional traits. The last section, with leadership from Associate Editor Jay Parkes, breaks out assessment by topic, with chapters focused on differentiated instruction (DI), students with disabilities, and different subjects.

    References
    Brennan, R. L. (ed.). (2006). Educational measurement (4th ed.). Westport, CT: Praeger.
    Gage, N. L. (ed.). (1963). Handbook of research on teaching. Chicago: Rand McNally.
    Lindquist, E. F. (ed.). (1951). Educational measurement. Washington, DC: American Council on Education.
    Linn, R. L. (ed.). (1989). Educational measurement (3rd ed.). Washington, DC: American Council on Education.
    Pellegrino, J. W., Chudowsky, N., & Glaser, R. (eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.
    Richardson, V. (ed.). (2001). Handbook of research on teaching (4th ed.). Washington, DC: American Educational Research Association.
    Shepard, L. A. (2006). Classroom assessment. In R. L. Brennan (ed.), Educational measurement (4th ed., pp. 623–646). Westport, CT: Praeger.
    Thorndike, R. L. (ed.). (1971). Educational measurement (2nd ed.). Washington, DC: American Council on Education.
    Travers, R. M. W. (ed.). (1973). Second handbook of research on teaching. Chicago: Rand McNally.
    Wittrock, M. C. (ed.). (1986). Handbook of research on teaching (3rd ed.). New York: Macmillan.

    Acknowledgments

    Of course, a project this large can only be completed with much help from many talented people. All the section editors have been great. Their willingness to help select and communicate with chapter authors and edit chapters has been indispensable. I am especially grateful to Susan M. Brookhart, Heidi L. Andrade, and Sarah M. Bonner, who have repeatedly and willingly shared their ideas and expertise about all aspects of putting this book together, including determining chapter content, organization, and assurance of high-quality work. The heart of this book is in the talent and work of the chapter authors, and I am grateful to each for their commitment and responsiveness to suggestions. Thanks to Cliff Conrad and Michael Connelly for sharing their experiences with editing previously published handbooks of research and to several anonymous reviewers of the book prospectus. Several graduate students here at Virginia Commonwealth University have been a great help with editing chores, including Amanda Turner, Divya Varier, and Reggie Brown. Finally, working with SAGE has been wonderful. I appreciate very much the faith shown by Diane McDaniel in approving the book, and Megan Koraly and Megan Markanich have been superb in keeping me on target, providing essential information and suggestions, and ushering the work through to completion.

    About the Authors

    Heidi L. Andrade is an associate professor of educational psychology and the associate dean for academic affairs at the School of Education, University at Albany—State University of New York. She received her EdD from Harvard University. Her research and teaching focus on the relationships between learning and assessment, with emphases on student self-assessment and self-regulated learning (SRL). She has written numerous articles, including an award-winning article on rubrics for Educational Leadership (1997). She edited a special issue on assessment for Theory Into Practice (2009), coedited The Handbook of Formative Assessment (2010) with Gregory Cizek, and is coediting a special issue of Applied Measurement in Education with Christina Schneider.

    Susan F. Belgrad is a professor of education at California State University, Northridge, where she leads graduate courses and professional development for teachers in differentiated learning and assessment, creating student portfolios, and supporting media-rich classrooms with brain-based and student-centered practices. She received her EdD from the George Peabody College of Education at Vanderbilt University and has authored books and articles on portfolio assessment, including the third edition of The Portfolio Connection: Student Work Linked to Standards (2007) with Kay Burke and Robin Fogarty. Her research is on the impact of teacher efficacy on student learning. She has taught at the elementary level and in early childhood special education and has served in leadership posts in higher education in early childhood and teacher leadership for pre-K-12.

    Paul Black is professor emeritus of science education at King's College London. He has made many contributions in both curriculum development and in assessment research. He has served on advisory groups of the National Research Council (NRC) and as visiting professor at Stanford University. His work on formative assessment with Dylan Wiliam and colleagues at King's has had widespread impact.

    Sarah M. Bonner is an associate professor in the Department of Educational Foundations and Counseling Programs at Hunter College, City University of New York. Prior to entering higher education, she worked in K-12 education for many years, as a teacher in programs for high-risk adolescents in Chicago and Southern Arizona, in dropout prevention and program development, and as an educational program evaluator. Her research focuses on the beliefs and skills of classroom teachers that relate to their formative and summative assessment practices and the cognitive and metacognitive processes used by test takers on tests of different types.

    Marc A. Brackett is a research scientist in the Department of Psychology at Yale University, the deputy director of Yale's Health, Emotion, and Behavior Laboratory and head of the Emotional Intelligence Unit in the Edward Zigler Center in Child Development and Social Policy. He is an author on more than 80 scholarly publications and codeveloper of The RULER Approach to Social and Emotional Learning, an evidence-based program teaching K-12 students and educators the skills associated with recognizing, understanding, labeling, expressing, and regulating emotions to promote positive social, emotional, and academic development.

    Susan M. Brookhart is an independent educational consultant. She is a former professor and chair of the Department of Educational Foundations and Leadership in the School of Education at Duquesne University, where she currently is senior research associate in the Center for Advancing the Study of Teaching and Learning (CASTL). She was the editor of Educational Measurement: Issues and Practice, a journal of the National Council on Measurement in Education (NCME) from 2007 to 2009. Her interests include the role of both formative and summative classroom assessment (CA) in student motivation and achievement, the connection between CA and large-scale assessment, and grading.

    Gavin T. L. Brown is an associate professor in the Faculty of Education at The University of Auckland. After growing up in Europe and Canada thanks to a military upbringing, he worked as a high school and adult teacher of English and English for speakers of other languages (ESOL) in New Zealand. He spent 9 years in standardized test development before working as an academic at the University of Auckland and the Hong Kong Institute of Education. His research focuses on school-based assessment, informed by psychometric theory, with a special interest in the social psychology of teacher and student responses to educational assessment. He is the author of Conceptions of Assessment (Nova, 2008) and has authored studies of teacher and student beliefs conducted in Spain, Cyprus, Hong Kong, China, Queensland, Louisiana, and New Zealand.

    William S. Bush is currently professor of mathematics education and director of the Center for Research in Mathematics Teacher Development at the University of Louisville. His research interests focus on the development and assessment of mathematics teacher knowledge and on mathematics assessments for students. He has led and participated in a number of large-scale teacher development projects funded by the National Science Foundation (NSF). He is a member of the National Council of Teachers of Mathematics (NCTM), the Association of Mathematics Teacher Educators, and the Kentucky Mathematics Coalition.

    Cynthia Campbell is an associate professor of educational research and assessment at Northern Illinois University. Her research and teaching interests include classroom assessment (CA), test development, and linguistic analysis. Campbell has published extensively in these areas and has given numerous refereed presentations at the state, regional, national, and international levels. Currently, she is the president of the Mid-Western Educational Research Association and a member of the American Educational Research Association (AERA), the National Council on Measurement in Education (NCME), and the American Counseling Association.

    Tedra Clark is a senior researcher at Mid-Continent Research for Education and Learning (McREL), where she leads applied research and evaluation projects aimed at improving educational outcomes through schoolwide interventions, classroom instruction, teacher professional development programs, and classroom assessment (CA). Dr. Clark was a coauthor and lead data analyst for a cluster randomized trial of the professional development program Classroom Assessment for Student Learning (CASL), funded by the U.S. Department of Education, Institute of Education Sciences. She was also the lead author of a large-scale research synthesis on CA funded by the Stupski Foundation. Prior to joining McREL, Dr. Clark was a graduate research assistant and adjunct professor of psychology at the University of Denver, where she facilitated both National Institutes of Health (NIH)- and National Science Foundation (NSF)-sponsored projects examining basic processes of learning and memory.

    Elizabeth Ann Claunch-Lesback is the director of curriculum for National History Day (NHD) and is professor emeritus of history education at the University of New Mexico in Albuquerque. Dr. Claunch-Lesback's educational experience includes 14 years as a public school teacher and 12 years as a university professor. She has researched, presented, and written professionally on history education and adult education.

    Bronwen Cowie is director of the Wilf Malcolm Institute of Educational Research, The University of Waikato. She has expertise in classroom-based focus group and survey research and has led a number of large externally funded projects focused on assessment for learning (AFL), curriculum implementation, and information and communication technology (ICT)/e-learning. Her particular research interests include student views of assessment and AFL interactions in primary science and technology classrooms.

    Karla L. Egan is a research manager at CTB/McGraw-Hill, where she manages a diverse group of research scientists and research associates and provides leadership on issues related to assessment development, assessment policy, and psychometrics. She provides guidance to state departments of education for their customized assessments, and she works with educators through standard-setting and form-selection workshops. She has led or supported over 60 standard settings for statewide test programs. Her research has been published in the Peabody Journal of Education, and as a noted expert in standard setting, she has been the lead author of several book chapters on this topic. She led the development of a framework for developing the achievement level descriptors that guide test development, standard setting, and score reporting. Her current research focuses on standard setting, test security, identification of aberrant anchor items, and language assessment for English language learners (ELLs).

    Carolin Hagelskamp earned her PhD in community psychology at New York University and was a postdoctoral research associate at the Health, Emotion, and Behavior Laboratory at Yale University. Her academic research focuses on contextual factors in early adolescent development, including work–family, classroom climate, race/ethnicity, and immigration. She is currently director of research at Public Agenda, where she conducts public opinion research on social policy issues such as K–12 education reform.

    Thomas M. Haladyna is professor emeritus of the Mary Lou Fulton Teachers College at Arizona State University. He specializes in item and test development and validation. He has authored or edited 14 books; more than 60 journal articles; and hundreds of conference papers, white papers, opinions, and technical reports. He has been interviewed by and has assisted the media in many investigations of test fraud.

    Lois Harris is an honorary research fellow at the University of Auckland and teaches in CQUniversity Australia's postgraduate program. Her research examines relationships between educational stakeholders’ thinking and their practices, with recent studies investigating assessment and student engagement. She also has a strong interest in both qualitative and quantitative research methodologies. Previously, she was a secondary school teacher in the United States and Australia, working in both mainstream and distance education modes.

    Margaret Heritage is assistant director for professional development at the National Center for Research on Evaluation, Standards, & Student Testing (CRESST) at the University of California, Los Angeles. For many years, her work has focused on research and practice in formative assessment. In addition to publishing extensively on formative assessment, she has made numerous presentations on the topic throughout the United States and in Europe, Australia, and Asia.

    Thomas P. Hogan is professor of psychology and distinguished university fellow at the University of Scranton, where he previously served as dean of the Graduate School, director of research, and interim provost/academic vice president. He is the author or coauthor of four books on measurement and research methods; several nationally used standardized tests; and over 150 published articles, chapters, and presentations related to psychological and educational measurement. He holds a bachelor's degree from John Carroll University and both master's and doctoral degrees from Fordham University, with a specialization in psychometrics.

    Marc W. Julian is a research manager at CTB/McGraw-Hill. He manages a diverse group of research scientists and research associates, providing technical and managerial support to this team of researchers. Dr. Julian was the lead research scientist for TerraNova, The Second Edition. Dr. Julian has also served as the lead research scientist for many statewide assessment projects, including the groundbreaking Maryland School Performance Assessment Program (MSPAP). His research has been published in Applied Measurement in Education, Educational Measurement: Issues and Practice, Journal of Educational Measurement, and Structural Equation Modeling.

    Suzanne Lane is a professor in the Research Methodology Program at the University of Pittsburgh. Her research and professional interests are in educational measurement and testing—in particular, design, validity, and technical issues related to large-scale assessment and accountability systems, including performance-based assessments. Her work is published in journals such as Applied Measurement in Education, Educational Measurement: Issues and Practice, and the Journal of Educational Measurement. She was the president of the National Council on Measurement in Education (NCME) (2003–2004), vice president of Division D of the American Educational Research Association (AERA) (2000–2002), and a member of the AERA, American Psychological Association (APA), and NCME Joint Committee for the Revision of the Standards for Educational and Psychological Testing (1993–1999).

    Min Li, an associate professor at the College of Education, University of Washington, is an assessment expert deeply interested in understanding how student learning can be accurately and adequately assessed in both large-scale testing and classroom settings. Her research and publications reflect a combination of cognitive science and psychometric approaches in various projects, including examining the cognitive demands of state test science items, analyzing teachers’ classroom assessment (CA) practices, developing instruments to evaluate teachers’ assessment practices, and using science notebooks as assessment tools.

    Maggie B. McGatha is an associate professor of mathematics education in the Department of Middle and Secondary Education in the College of Education and Human Development at the University of Louisville. Dr. McGatha teaches elementary and middle school mathematics methods courses as well as courses on coaching and mentoring. Her research interests are mathematics teacher professional development, mathematics coaching, and mathematics assessment.

    James H. McMillan is professor and chair of the Department of Foundations of Education at Virginia Commonwealth University, where he has been teaching for 32 years. Dr. McMillan is also executive director of the Metropolitan Educational Research Consortium, a partnership between Virginia Commonwealth University and eight Richmond, Virginia, school districts that plans, executes, and disseminates results of applied research on issues of importance to the schools. He has published several books, including Educational Research: Fundamentals for the Consumer and Classroom Assessment: Principles and Practice for Effective Standards-Based Instruction, has published extensively in journals, and has presented nationally and internationally on assessment in education and research methods.

    Tonya R. Moon is a professor in the Curry School of Education at the University of Virginia. Her specializations are in the areas of educational measurement, research, and evaluation. She works with educational institutions both nationally and internationally on using better assessment techniques for improving instruction and student learning.

    Connie M. Moss is an associate professor in the Department of Foundations and Leadership, School of Education at Duquesne University in Pittsburgh, where she also directs the Center for Advancing the Study of Teaching and Learning (CASTL). During her career, she has taught in urban public schools; directed regional and statewide initiatives focused on bringing excellence and equity to all students; and worked extensively in schools with teachers, principals, and central office administrators. Her current work includes the study of formative assessment with a particular focus on the relationships among effective teaching, self-regulated student learning, formative educational leadership, and social justice.

    Jay Parkes is currently chair of the Department of Individual, Family & Community Education and an associate professor of educational psychology at the University of New Mexico, where he teaches graduate courses in classroom assessment (CA), educational measurement, introductory and intermediate statistics, and research design. His areas of expertise include performance and alternative assessments, CA, and feedback, which he pursues in both dual language education and medical education.

    Judy M. Parr is a professor of education and head of the School of Curriculum and Pedagogy in the Faculty of Education at the University of Auckland. Her particular expertise is in writing, encompassing how writing develops, the cultural tools of literacy, and considerations of instructional issues like teacher knowledge and practice and, in particular, assessment of written language. A major research focus concerns school change and improvement in order to ensure effective practice and raise achievement. Judy has published widely in a range of international journals spanning literacy, technology, policy and administration, and school change. Two books cowritten or edited with Professor Helen Timperley bridge theory and practice: Using Evidence in Teaching Practice: Implications for Professional Learning (2004) and Weaving Evidence, Inquiry and Standards to Build Better Schools (2010).

    Bruce Randel is an independent research consultant, based in Centennial, Colorado, providing services in education research, statistical analysis, educational measurement and psychometrics, and technical reporting. He is former principal researcher at Mid-Continent Research for Education and Learning (McREL) and senior research scientist at CTB/McGraw-Hill. His research interests include randomized controlled trials (RCTs), measurement of formative assessment practice, test and instrument development, and statistical analyses.

    Susan E. Rivers is a research scientist in the Department of Psychology at Yale University. She is the associate director of the Health, Emotion, and Behavior Laboratory at Yale and a fellow at the Edward Zigler Center in Child Development and Social Policy. She is a codeveloper of The RULER Approach to Social and Emotional Learning and the achievement model of emotional literacy, as well as several curricula designed to help children, educators, and parents become emotionally literate. In her grant-funded research, she investigates how emotional literacy training affects positive youth development and creates supportive learning environments. Dr. Rivers is the coauthor of many scholarly articles and papers, and she trains educators and families on the RULER programs.

    Michael C. Rodriguez is associate professor and coordinator of Quantitative Methods in Education in the Department of Educational Psychology at the University of Minnesota. He received his PhD in measurement and quantitative methods at Michigan State University. His research interests include item writing, test accessibility, reliability theory, meta-analysis, and item response models and multilevel modeling. He is a member of the Academy of Distinguished Teachers at the University of Minnesota; on the editorial boards of Applied Measurement in Education, Educational Measurement: Issues & Practice, and Journal of Educational Measurement; a member of the board of directors of the National Council on Measurement in Education (NCME); and a recipient of the Harris Research Award from the International Reading Association.

    Maria Araceli Ruiz-Primo is an associate professor at the School of Education and Human Development, University of Colorado Denver. Her work focuses on two strands: (1) assessment of student learning at both the large-scale and classroom levels and (2) the study of teachers’ assessment practices. Her publications reflect these strands: (1) developing and evaluating different strategies to assess students’ learning, such as concept maps and students’ science notebooks, and (2) studying teachers’ informal and formal formative assessment practices, such as the use of assessment conversations and embedded assessments. Her recent work focuses on the development and evaluation of assessments that are instructionally sensitive and instruments to measure teachers’ formative assessment practices. She also coedited a special issue on assessment for the Journal of Research in Science Teaching.

    M. Christina Schneider is a research scientist at CTB/McGraw-Hill. She has worked as the lead research scientist on numerous state assessment projects and is a member of the CTB Standard Setting Team, helping to establish cut scores for large-scale assessments across the United States. She was the principal investigator on a federally funded $1.7 million multisite cluster randomized trial investigating the effects of a professional development program in formative classroom assessment (CA) on teacher and student achievement. Her research has been published in Applied Measurement in Education, Peabody Journal of Education, Journal of Multidisciplinary Evaluation, and Journal of Psychoeducational Assessment. Her areas of expertise include formative CA, automated essay scoring, standard setting, identification of aberrant anchor items, and assessment in the arts.

    Robin D. Tierney (University of Ottawa) is an independent researcher and writer in San Jose, California. Previously, she taught elementary English language arts and graduate level research methodology courses in Ontario, Canada. Her doctoral work focused on teachers’ practical wisdom (phronesis) about fairness in classroom assessment (CA). She has presented at educational research conferences in Canada, Europe, and the United States and has published several articles about CA. Her current research interests include the quality and ethics of CA and the use of complementary methods in educational research.

    Carol Ann Tomlinson is the William Clay Parrish Jr. Professor and chair of Educational Leadership, Foundations, and Policy at the University of Virginia's Curry School of Education, where she also serves as codirector of Curry's Institutes on Academic Diversity. Prior to joining the University of Virginia faculty, she was a public school teacher for 21 years.

    Keith J. Topping is a professor and director of the Centre for Paired Learning at the University of Dundee in Scotland. His research interests are in the development and evaluation of methods for nonprofessionals (such as parents or peers) to tutor others one-to-one in fundamental skills and higher order learning across many different subjects, contexts, and ages. He also has interests in electronic literacy and computer-aided assessment and in behavior management and social competence in schools. His publications include over 20 books, 50 chapters, 150 peer reviewed journal papers, 30 distance learning packages, and other items. He presents, trains, consults, and engages in collaborative action and research around the world.

    Cheryl A. Torrez is an assistant professor in the Department of Teacher Education at the University of New Mexico. Her research interests include curriculum and instruction in elementary classrooms, school–university partnerships, and teacher education across the professional life span. She is a former elementary school teacher and teaches undergraduate and graduate courses in curriculum and instruction and teacher inquiry. She is currently the vice president of the New Mexico Council for the Social Studies and cochair of the Teacher Education and Professional Development Community of the National Council for the Social Studies (NCSS).

    Dylan Wiliam is emeritus professor of educational assessment at the Institute of Education, University of London, where he was deputy director from 2006 to 2010. In a varied career, he has taught in urban public schools, directed a large-scale testing program, served in a number of roles in university administration, including dean of a School of Education, and pursued a research program focused on supporting teachers to develop their use of assessment in support of learning. From 2003 to 2006, he was senior research director at the Educational Testing Service in Princeton, New Jersey, and he now works as an independent consultant, dividing his time between the United States and the United Kingdom.

    Yaoying Xu is an associate professor in the Department of Special Education and Disability Policy at Virginia Commonwealth University. She teaches graduate courses including assessment, instructional programming, and multicultural perspectives in education, as well as a doctoral course that focuses on research design, funding, and conducting research in special education. Her research interests involve culturally and linguistically appropriate assessment and instruction for young children and students with diverse backgrounds, the impact of social interactions on school performance, and the empowerment of culturally diverse families of young children and students with disabilities in the process of assessment and intervention. She has published over 30 peer-reviewed journal articles and book chapters in the fields of general education, special education, and multicultural education.

