The SAGE Handbook of Applied Social Research Methods


Edited by: Leonard Bickman & Debra J. Rog


    Acknowledgments

    The editors are grateful for the assistance of Peggy Westlake in managing the complex process of developing and producing this Handbook.

    Publisher's Acknowledgments

    SAGE Publications gratefully acknowledges the contributions of the following reviewers:

    Neil Boyd, Pennsylvania State University, Capital College

    Julie Fleury, Arizona State University

    Steven Rogelberg, University of North Carolina, Charlotte

    Introduction: Why a Handbook of Applied Social Research Methods?

    Leonard Bickman and Debra J. Rog

    This second edition of the Handbook of Applied Social Research Methods includes 14 chapters revised and updated from the first edition as well as 4 new chapters. We selected the combination of chapters in this second edition to represent the cutting edge of applied social research methods and important changes that have occurred in the field in the decade since the first edition was published.

    One area that continues to gain prominence is the focus on qualitative research. In the first edition, 4 of the 18 chapters were focused on the qualitative approach; in this edition, a third of the Handbook now focuses on that approach. Moreover, research that combines quantitative and qualitative research methods, called mixed methods, has become a much more common requirement for studies. In Chapter 9, Abbas Tashakkori and Charles Teddlie present an approach to integrating qualitative and quantitative methods with an underlying belief that qualitative and quantitative methods are not dichotomous or discrete but are on a continuum of approaches.

    Another change that is reflected in many of the revised chapters as well as in two of the new chapters is the increasing use of technology in research. The use of the Internet and computer-assisted methods is discussed in several of the chapters and is the focus of Samuel Best and Chase Harrison's chapter (Chapter 13) on Internet survey methods. In addition, Mary Kane and Bill Trochim's contribution on concept mapping in Chapter 14 offers a cutting-edge technique involving both qualitative and quantitative methods in designing research.

    Finally, Michael Harrison's chapter on organizational diagnosis is a new contribution to this Handbook edition. Harrison's approach focuses on using methods and models from the behavioral and organization sciences to help identify what is going on in an organization and to help guide decisions based on this information.

    In addition to reflecting new developments such as the technological changes noted above, this edition responds to comments on the first edition, with an emphasis on increasing the pedagogical quality of the individual chapters and of the book as a whole. In particular, the text has been made more “classroom friendly” through the inclusion of discussion questions and exercises. The chapters are also current, citing new research and offering improved examples of the methods. Overall, however, research methods are not an area subject to rapid change.

    This version of the Handbook, like the first edition, presents the major methodological approaches to conducting applied social research that we believe need to be in a researcher's repertoire. It serves as a “handy” reference guide, covering key yet often diverse themes and developments in applied social research. Each chapter summarizes and synthesizes the major topics and issues of a method from a broad perspective, while providing information on additional resources for more in-depth treatment of any one topic or issue.

    Applied social research methods span several substantive arenas, and the boundaries of application are not well-defined. The methods can be applied in educational settings, environmental settings, health settings, business settings, and so forth. In addition, researchers conducting applied social research come from several disciplinary backgrounds and orientations, including sociology, psychology, business, political science, education, geography, and social work, to name a few. Consequently, a range of research philosophies, designs, data collection methods, analysis techniques, and reporting methods can be considered to be “applied social research.” Applied research, because it consists of a diverse set of research strategies, is difficult to define precisely and inclusively. It is probably most easily defined by what it is not, thus distinguishing it from basic research. Therefore, we begin by highlighting several differences between applied and basic research; we then present some specific principles relevant to most of the approaches to applied social research discussed in this Handbook.

    Distinguishing Applied from Basic Social Research

    Social scientists are frequently involved in tackling real-world social problems. The research topics are exceptionally varied. They include studying physicians' efforts to improve patients' compliance with medical regimens, determining whether drug use is decreasing at a local high school, providing up-to-date information on the operations of new educational programs and policies, evaluating the impacts of environmental disasters, and analyzing the likely effects of yet-to-be-tried programs to reduce teenage pregnancy. Researchers are asked to estimate the costs of everything from shopping center proposals to weapons systems and to speak to the relative effectiveness of alternative programs and policies. Increasingly, applied researchers are contributing to major public policy debates and decisions.

    Applied research uses scientific methodology to develop information to help solve an immediate, yet usually persistent, societal problem. The applied research environment is often complex, chaotic, and highly political, with pressures for quick and conclusive answers yet little or no experimental control. Basic research, in comparison, also is firmly grounded in the scientific method but has as its goal the creation of new knowledge about how fundamental processes work. Control is often provided through a laboratory environment.

    These differences between applied and basic research contexts can sometimes seem artificial to some observers, and highlighting them may create the impression that researchers in the applied community are “willing to settle” for something less than rigorous science. In practice, applied research and basic research have many more commonalities than differences; however, it is critical that applied researchers (and research consumers) understand the differences. Basic research and applied research differ in purposes, context, and methods. For ease of presentation, we discuss the differences as dichotomies; in reality, however, they fall on continua.

    Differences in Purpose
    Knowledge Use versus Knowledge Production

    Applied research strives to improve our understanding of a “problem,” with the intent of contributing to the solution of that problem. The distinguishing feature of basic research, in contrast, is that it is intended to expand knowledge (i.e., to identify universal principles that contribute to our understanding of how the world operates). Thus, it is knowledge, as an end in itself, that motivates basic research. Applied research also may result in new knowledge, but often on a more limited basis defined by the nature of an immediate problem. Although it may be hoped that basic research findings will eventually be helpful in solving particular problems, such problem solving is not the immediate or major goal of basic research.

    Broad versus Narrow Questions

    The applied researcher is often faced with “fuzzy” issues that have multiple, often broad research questions, and addresses them in a “messy” or uncontrolled environment. For example, what is the effect of the provision of mental health services to people living with AIDS? What are the causes of homelessness?

    Even when the questions are well-defined, the applied environment is complex, making it difficult for the researcher to eliminate competing explanations (e.g., events other than an intervention could be likely causes for changes in attitudes or behavior). Obviously, in the example above, aspects of an individual's life other than mental health services received will affect that person's well-being. The number and complexity of measurement tasks and dynamic real-world research settings pose major challenges for applied researchers. They also often require that researchers make conscious choices (trade-offs) about the relative importance of answering various questions and the degree of confidence necessary for each answer.

    In contrast, basic research investigations are usually narrow in scope. Typically, the basic researcher is investigating a very specific topic and a very tightly focused question. For example, what is the effect of white noise on the short-term recall of nonsense syllables? Or what is the effect of cocaine use on fine motor coordination? The limited focus enables the researcher to concentrate on a single measurement task and to use rigorous design approaches that allow for maximum control of potentially confounding variables. In an experiment on the effects of white noise, the laboratory setting enables the researcher to eliminate all other noise variables from the environment, so that the focus can be exclusively on the effects of the variable of interest, the white noise.

    Practical versus Statistical Significance

    There are differences also between the analytic goals of applied research and those of basic research. Basic researchers generally are most concerned with determining whether or not an effect or causal relationship exists, whether or not it is in the direction predicted, and whether or not it is statistically significant. In applied research, both practical significance and statistical significance are essential. Besides determining whether or not a causal relationship exists and is statistically significant, applied researchers are interested in knowing if the effects are of sufficient size to be meaningful in a particular context. It is critical, therefore, that the applied researcher understands the level of outcome that will be considered “significant” by key audiences and interest groups. For example, what level of reduced drug use is considered a practically significant outcome of a drug program? Is a 2% drop meaningful? Thus, besides establishing whether the intervention has produced statistically significant results, applied research has the added task of determining whether the level of outcome attained is important or trivial.
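
    To make this distinction concrete, here is a minimal sketch (our own illustration, using hypothetical drug-use rates and sample sizes rather than data from any study in this Handbook) of how a 2-percentage-point drop can be highly statistically significant in a large evaluation while the question of its practical importance remains open:

```python
# Illustrative only: hypothetical drug-use rates and sample sizes.
from math import sqrt, erfc

n_control, n_program = 20_000, 20_000   # hypothetical participants per group
p_control, p_program = 0.30, 0.28       # drug-use rates: a 2-percentage-point drop

# Two-sample z-test for a difference in proportions, using a pooled standard error.
p_pool = (p_control * n_control + p_program * n_program) / (n_control + n_program)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_program))
z = (p_control - p_program) / se
p_value = erfc(abs(z) / sqrt(2))        # two-sided p-value from the normal distribution

print(f"z = {z:.2f}, two-sided p = {p_value:.6f}")
# Statistical significance is clear; practical significance -- whether a 2-point
# drop matters to stakeholders -- is a separate, substantive judgment.
```

    With 20,000 participants per group, the drop is statistically unambiguous (p < .001), but whether a 2-point reduction justifies the program's cost and effort is a judgment the data alone cannot settle.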

    Theoretical “Opportunism” versus Theoretical “Purity”

    Applied researchers are more likely than basic researchers to use theory instrumentally. Related to the earlier concept of practical significance, the applied researcher is interested in applying and using a theory only if it identifies variables and concepts that will likely produce important, practical results. Purity of theory is not as much a driving force as is utility. Does the theory help solve the problem? Moreover, if several theories appear useful, the applied researcher will combine them, it is hoped, in a creative and useful way. Those involved in evaluation research, in particular, are most often trying to understand the “underlying theory” or logic of the program or policy they are studying and to use that understanding to guide the research.

    For the basic researcher, on the other hand, it is the underlying formal theory that is of prime importance. Thus, the researcher will strive to have variables in the study that are flawless representations of the underlying theoretical constructs. In a study examining the relationships between frustration and aggression, for example, the investigator would try to be certain that the study deals with aggression and not another related construct, such as anger, and that frustration is actually manipulated, and not boredom.

    Differences in Context
    Open versus Controlled Environment

    The context of the research is a major factor in accounting for the differences between applied research and basic research. As noted earlier, applied research can be conducted in many diverse contexts, including business settings, hospitals, schools, prisons, and communities. These settings, and their corresponding characteristics, can pose quite different demands on applied researchers. The applied researcher is more concerned about generalizability of findings. Since application is a goal, it is important to know how dependent the results of the study are on the particular environment in which it was tested. In addition, lengthy negotiations are sometimes necessary for a researcher even to obtain permission to access the data.

    Basic research, in contrast, is typically conducted in universities or similar academic environments and is relatively isolated from the government or business worlds. The environment is within the researcher's control and is subject to close monitoring.

    Client Initiated versus Researcher Initiated

    The applied researcher often receives research questions from a client or research sponsor, and sometimes these questions are poorly framed and incompletely understood. Clients of applied social research can include federal government agencies, state governments and legislatures, local governments, government oversight agencies, professional or advocacy groups, private research institutions, foundations, business corporations and organizations, and service delivery agencies, among others. The client is often in control, whether through a contractual relationship or by virtue of holding a higher position within the researcher's place of employment (if the research is being conducted internally). Typically, the applied researcher needs to negotiate with the client about the project scope, cost, and deadlines. Based on these parameters, the researcher may need to make conscious trade-offs in selecting a research approach that affects what questions will be addressed and how conclusively they will be addressed.

    University basic research, in contrast, is usually self-initiated, even when funding is obtained from sources outside the university environment, such as through government grants. The idea for the study, the approach to executing it, and even the timeline are generally determined by the researcher. The reality is that the basic researcher, in comparison with the applied researcher, operates in an environment with a great deal more flexibility, less need to let the research agenda be shaped by project costs, and less time pressure to deliver results by a specified deadline. Basic researchers sometimes can undertake multiyear incremental programs of research intended to build theory systematically, often with supplemental funding and support from their universities.

    Research Team versus Solo Scientist

    Applied research is typically conducted by research teams. These teams are likely to be multidisciplinary, sometimes as a result of competitive positioning to win grants or contracts. Moreover, the substance of applied research often demands multidisciplinary teams, particularly for studies that address multiple questions involving different areas of inquiry (e.g., economic, political, sociological). These teams must often comprise individuals who are familiar with the substantive issue (e.g., health care) and others who have expertise in specific methodological or statistical areas (e.g., economic forecasting).

    Basic research is typically conducted by an individual researcher who behaves autonomously, setting the study scope and approach. If there is a research team, it generally comprises the researcher's students or other persons that the researcher chooses from the same or similar disciplines.

    Differences in Methods
    External versus Internal Validity

    A key difference between applied research and basic research is the relative emphasis on internal and external validity. Whereas internal validity is essential to both types of research, external validity is much more important to applied research. Indeed, the likelihood that applied research findings will be used often depends on the researchers' ability to convince policymakers that the results are applicable to their particular setting or problem. For example, the results from a laboratory study of aggression using a bogus shock generator are not as likely to be as convincing or as useful to policymakers who are confronting the problem of violent crime as are the results of a well-designed survey describing the types and incidence of crime experienced by inner-city residents.

    The Construct of Effect versus the Construct of Cause

    Applied research concentrates on the construct of effect. It is especially critical that the outcome measures are valid—that they accurately measure the variables of interest. Often, it is important for researchers to measure multiple outcomes and to use multiple measures to assess each construct fully. Mental health outcomes, for example, may include measures of daily functioning, psychiatric status, and use of hospitalization. Moreover, measures of real-world outcomes often require more than self-report and simple paper-and-pencil measures (e.g., self-reported satisfaction with participation in a program). If attempts are being made to address a social problem, then real-world measures directly related to that problem are desirable. For example, if one is studying the effects of a program designed to reduce intergroup conflict and tension, then observations of the interactions among group members will have more credibility than group members' responses to questions about their attitudes toward other groups. In fact, considerable research evidence in social psychology demonstrates that attitudes and behavior often do not correspond.

    Basic research, on the other hand, concentrates on the construct of cause. In laboratory studies, the independent variable (cause) must be clearly explicated and not confounded with any other variables. It is rare in applied research settings that control over an independent variable is so clear-cut. For example, in a study of the effects of a treatment program for drug abusers, it is unlikely that the researcher can isolate the aspects of the program that are responsible for the outcomes that result. This is due to both the complexity of many social programs and the researcher's inability in most circumstances to manipulate different program features to discern different effects.

    Multiple versus Single Levels of Analysis

    The applied researcher, in contrast to the basic researcher, usually needs to examine a specific problem at more than one level of analysis, not only studying the individual, but often larger groups, such as organizations or even societies. For example, in one evaluation of a community crime prevention project, the researcher not only examined individual attitudes and perspectives but also measured the reactions of groups of neighbors and neighborhoods to problems of crime. These added levels of analysis may require that the researcher be conversant with concepts and research approaches found in several disciplines, such as psychology, sociology, and political science, and that he or she develop a multidisciplinary research team that can conduct the multilevel inquiry.

    Similarly, because applied researchers are often given multiple questions to answer, because they must work in real-world settings, and because they often use multiple measures of effects, they are more likely to use multiple research methods, often including both quantitative and qualitative approaches. Although using multiple methods may be necessary to address multiple questions, it may also be a strategy for triangulating on a difficult problem from several directions, thus lending additional confidence to the study results. Although it is desirable for researchers to use experimental designs whenever possible, the applied researcher is often called in after a program or intervention is in place and consequently is precluded from building random assignment into the allocation of program resources. Thus, applied researchers often use quasi-experimental designs. The reverse, however, is rare: quasi-experimental designs are generally not found in studies published in basic research journals.

    The Orientation of this Handbook

    This second edition is designed to be a resource for professionals and students alike. It can be used in tandem with the Applied Social Research Methods Series that is coedited by the present editors. The series has more than 50 volumes related to the design of applied research, the collection of both quantitative and qualitative data, and the management and presentation of these data. Almost all the authors in the Handbook also authored a book in that series on the same topic.

    Similar to our goal as editors of the book series, our goal in this Handbook is to offer a hands-on, how-to approach to research that is sensitive to the constraints and opportunities in the practical and policy environments, yet is rooted in rigorous and sound research principles. Abundant examples and illustrations, often based on the authors' own experience and work, enhance the potential usefulness of the material to students and others who may have limited experience in conducting research in applied arenas. In addition, discussion questions and exercises in each chapter are designed to increase the usefulness of the Handbook in the classroom environment.

    The contributors to the Handbook represent various disciplines (sociology, business, psychology, political science, education, economics) and work in diverse settings (academic departments, research institutes, government, the private sector). Through a concise collection of their work, we hope to provide in one place a diversity of perspectives and methodologies that others can use in planning and conducting applied social research. Despite this diversity of perspectives, methods, and approaches, several central themes are stressed across the chapters. We describe these themes in turn below.

    The Iterative Nature of Applied Research

    In most applied research endeavors, the research question—the focus of the effort—is rarely static. Rather, to maintain the credibility, responsiveness, and quality of the research project, the researcher must typically make a series of iterations within the research design. The iteration is necessary not because of methodological inadequacies, but because of successive redefinitions of the applied problem as the project is being planned and implemented. New knowledge is gained, unanticipated obstacles are encountered, and contextual shifts take place that change the overall research situation and in turn have effects on the research. The first chapter in this Handbook, by Bickman and Rog, describes an iterative approach to planning applied research that continually revisits the research question as trade-offs in the design are made. In Chapter 7, Maxwell also discusses the iterative, interactive nature of qualitative research design, highlighting the unique relationships that occur in qualitative research among the purposes of the research, the conceptual context, the questions, the methods, and validity.

    Multiple Stakeholders

    As noted earlier, applied research involves the efforts and interests of multiple parties. Those interested in how a study is conducted and in its results can include the research sponsor, individuals involved in the intervention or program under study, the potential beneficiaries of the research (e.g., those who could be affected by the results of the research), and potential users of the research results (such as policymakers and business leaders). In some situations, the cooperation of these parties is critical to the successful implementation of the project. Usually, the involvement of these stakeholders helps ensure that the results of the research will be relevant, useful, and, it is hoped, actually used to address the problem that the research was intended to study.

    Many of the contributors to this volume stress the importance of consulting and involving stakeholders in various aspects of the research process. Bickman and Rog describe the role of stakeholders throughout the planning of a study, from the specification of research questions to the choice of designs and design trade-offs. Similarly, in Chapter 4, on planning ethically responsible research, Sieber emphasizes the importance of researchers' attending to the interests and concerns of all parties in the design stage of a study. Kane and Trochim, in Chapter 14, offer concept mapping as a structured technique for engaging stakeholders in the decision making and planning of research.

    Ethical Concerns

    Research ethics are important in all types of research, basic or applied. When the research involves or affects human beings, the researcher must attend to a set of ethical and legal principles and requirements that can ensure the protection of the interests of all those involved. Ethical issues, as Boruch and colleagues note in Chapter 5, commonly arise in experimental studies when individuals are asked to be randomly assigned into either a treatment condition or a control condition. However, ethical concerns are also raised in most studies in the development of strategies for obtaining informed consent, protecting privacy, guaranteeing anonymity, and/or ensuring confidentiality, and in developing research procedures that are sensitive to and respectful of the specific needs of the population involved in the research (see Sieber, Chapter 4; Fetterman, Chapter 17). As Sieber notes, although attention to ethics is important to the conduct of all studies, the need for ethical problem solving is particularly heightened when the researcher is dealing with highly political and controversial social problems, in research that involves vulnerable populations (e.g., individuals with AIDS), and in situations where stakeholders have high stakes in the outcomes of the research.

    Enhancing Validity

    Applied research faces challenges that threaten the validity of studies' results. Difficulties in mounting the most rigorous designs, in collecting data from objective sources, and in designing studies that have universal generalizability require innovative strategies to ensure that the research continues to produce valid results. Lipsey and Hurley, in Chapter 2, describe the link between internal validity and statistical power and how good research practice can increase the statistical power of a study. In Chapter 6, Mark and Reichardt outline the threats to validity that challenge experiments and quasi-experiments and various design strategies for controlling these threats. Henry, in his discussion of sampling in Chapter 3, focuses on external validity and the construction of samples that can provide valid information about a broader population. Other contributors in Part III (Fowler & Cosenza, Chapter 12; Lavrakas, Chapter 16; Mangione & Van Ness, Chapter 15) focus on increasing construct validity through the improvement of the design of individual questions and overall data collection tools, the training of data collectors, and the review and analysis of data.
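
    As a rough illustration of the power side of that link (a sketch of our own using a textbook normal approximation, not an example drawn from Chapter 2; the effect sizes and sample sizes are hypothetical), the snippet below shows how power for a simple two-group comparison rises as the sample grows or as better measurement yields a larger standardized effect:

```python
# Illustrative normal-approximation power calculation for a two-group comparison;
# the standardized effect sizes (d) and group sizes (n) below are hypothetical.
from math import sqrt
from statistics import NormalDist

def approx_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power to detect a standardized mean difference d."""
    z = NormalDist()                         # standard normal distribution
    z_crit = z.inv_cdf(1 - alpha / 2)        # two-sided critical value
    return z.cdf(d * sqrt(n_per_group / 2) - z_crit)

for d, n in [(0.3, 100), (0.3, 200), (0.5, 100)]:
    print(f"effect d = {d}, n = {n} per group -> power ~ {approx_power(d, n):.2f}")
```

    Under these hypothetical values, doubling the group size from 100 to 200 raises power from roughly .56 to .85, and measuring well enough that the standardized effect grows from 0.3 to 0.5 raises it further still.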

    Triangulation of Methods and Measures

    One method of enhancing validity is to develop converging lines of evidence. As noted earlier, a clear hallmark of applied research is the triangulation of methods and measures to compensate for the fallibility of any single method or measure. The validity of both qualitative and quantitative applied research is bolstered by triangulation in data collection. Yin (Chapter 8), Maxwell (Chapter 7), and Fetterman (Chapter 17) stress the importance of triangulation in qualitative research design, ethnography, and case study research. Similarly, Bickman and Rog support the use of multiple data collection methods in all types of applied research.

    Qualitative and Quantitative

    Unlike traditional books on research methods, this volume does not have separate sections for quantitative and qualitative methods. Rather, both types of research are presented together as approaches to consider in research design, data collection, analysis, and reporting. Our emphasis is to find the tools that best fit the research question, context, and resources at hand. Often, multiple tools are needed, cutting across qualitative and quantitative boundaries, to research a topic thoroughly and provide results that can be used. Chapter 9 by Tashakkori and Teddlie specifically focuses on the use of mixed methods designs.

    Several tools are described in this Handbook. Experimental and quasi-experimental approaches are discussed (Boruch et al., Chapter 5; Mark & Reichardt, Chapter 6; Lipsey & Hurley, Chapter 2) alongside qualitative approaches to design (Maxwell, Chapter 7), including case studies (Yin, Chapter 8), ethnographies (Fetterman, Chapter 17), and approaches that are shaped by their organizational setting (Harrison, Chapter 10). Data collection tools covered include surveys (in person, mail, Internet, and telephone), focus groups (Stewart, Shamdasani, & Rook, Chapter 18), and newer approaches such as concept mapping (Kane & Trochim, Chapter 14).

    Technological Advances

    Recent technological advances can help applied researchers conduct their research more efficiently, with greater precision, and with greater insight than in the past. Clearly, advancements in computers have improved the quality, timeliness, and power of research. Analyses of large databases with multiple levels of data would not be possible without high-speed computers. Statistical syntheses of research studies, called meta-analyses (Cooper, Patall, & Lindsay, Chapter 11), have become more common in a variety of areas, in part due to the accessibility of computers. Computers are required if the Internet is going to be used for data collection as described by Best and Harrison in Chapter 13. Qualitative studies can now benefit from computer technology, with software programs that allow for the identification and analysis of themes in narratives (Tashakkori & Teddlie, Chapter 9), programs that simply allow the researcher to organize and manage the voluminous amounts of qualitative data typically collected in a study (Maxwell, Chapter 7; Yin, Chapter 8), and laptops that can be used in the field to provide for efficient data collection (Fetterman, Chapter 17). In addition to computers, other new technology provides for innovative ways of collecting data, such as through videoconferencing (Fetterman, Chapter 17) and the Internet.

    However, the researcher has to be careful not to get caught up in using technology that only gives the appearance of advancement. Lavrakas points out that the use of computerized telephone interviews has not been shown to save time or money over traditional paper-and-pencil surveys.

    Research Management

    The nature of the context in which applied researchers work highlights the need for extensive expertise in research planning. Applied researchers must take deadlines seriously and design research that can deliver useful information within the constraints of budget, time, and available staff. The key to quality work is to use the most rigorous methods possible while making intelligent and conscious trade-offs in scope and conclusiveness. This does not mean that any information is better than none, but that decisions about what information to pursue must be made deliberately, with realistic assessments of the feasibility of executing the proposed research within the required time frame. Bickman and Rog (Chapter 1) and Boruch et al. (Chapter 5) describe the importance of research management from the early planning stages through the communication and reporting of results.

    Conclusion

    We hope that the contributions to this Handbook will help guide readers in selecting appropriate questions and procedures to use in applied research. Consistent with a handbook approach, the chapters are not intended to provide the details necessary for readers to use each method or to design comprehensive research; rather, they are intended to provide the general guidance readers will need to address each topic more fully. This Handbook should serve as an intelligent guide, helping readers select the approaches, specific designs, and data collection procedures that they can best use in applied social research.


    About the Editors

    Leonard Bickman, PhD, is Professor of psychology, psychiatry, and public policy. He is Director of the Center for Evaluation and Program Improvement and Associate Dean for Research at Peabody College. He is a nationally recognized leader in program evaluation and mental health services research on children and adolescents. He has published more than 15 books and monographs and 190 articles and chapters and has been principal investigator on more than 25 major grants from several agencies. He is coeditor of The SAGE Handbook of Social Research Methods and coauthor with Debra Rog of the very popular book Applied Research Design: A Practical Guide. He earned his PhD in psychology (social) from the City University of New York, his master's degree in experimental psychopathology from Columbia University, and his bachelor's degree from the City College of New York.

    Debra J. Rog, PhD, is Associate Director with Westat and Vice President of The Rockville Institute. Prior to joining Westat in January of 2007, she was a senior research associate and directed the Washington office of Vanderbilt University's Center for Evaluation and Program Improvement (CEPI) for 17 years. She has nearly 30 years of experience in program evaluation and applied research. She has numerous publications on evaluation and research methods as well as homelessness, housing, poverty, and mental health. She is currently president-elect of the American Evaluation Association and has served on its board of directors. She completed an appointment on the Advisory Committee of Women's Services for the U.S. Substance Abuse and Mental Health Services Administration and has been recognized for her evaluation work by the National Institute of Mental Health, the American Evaluation Association, the Eastern Evaluation Research Society, and the Knowledge Utilization Society. With Leonard Bickman, she coedits the SAGE Applied Research Methods Series of textbooks. She received her PhD in social psychology from Vanderbilt University.

    About the Contributors

    Samuel J. Best is Associate Professor of Political Science and Director of the Center for Survey Research and Analysis at the University of Connecticut. He has written numerous academic articles and books, including a volume for Sage, titled Internet Data Collection.

    Robert F. Boruch is University Trustee Chair Professor of Education and Professor of Statistics at the Wharton School of Business, University of Pennsylvania. Prior to joining the University of Pennsylvania, he held faculty appointments at Northwestern University and the University of Chicago and research positions with the Social Science Research Council, the American Council on Education, and the National Academy of Sciences. His primary research interests are statistical research and policy, the design of controlled field experiments, and ethics and data access in surveys, among other topics. He is an expert on research methods for evaluating programs and currently consults with multiple government agencies, including the General Accounting Office, the Department of Education, and the Department of Justice. He has won many professional and teaching awards, including the American Educational Research Association Research Review Award, the American Evaluation Association's Gunnar and Alva Myrdal Award, and the Donald T. Campbell Award from the Policy Studies Organization. He obtained his PhD from Iowa State University and his BE from Stevens Institute of Technology.

    Harris M. Cooper is Professor and Director of the Program in Education at Duke University. His research interests include research synthesis, applications of social psychology to educational policy issues, homework, school calendars, and after-school programs. He earned his doctorate degree in social psychology from the University of Connecticut.

    Carol Cosenza joined the Center for Survey Research, University of Massachusetts at Boston, in 1988. She is currently a project manager and also coordinates the Center's cognitive testing and question evaluation work. She has been involved in all phases of the survey process—from question design to data coding and analysis. The recent focus of her methodological research has been comparing different ways that survey questions can be evaluated and how to understand what is learned from that testing. She has also been working on a series of studies of how the details of question wording affect data quality. She graduated from Dartmouth College and received her MSW from Boston University.

    David M. Fetterman, PhD, is Director of Evaluation in the School of Medicine at Stanford University. He is concurrently Collaborating Professor, Colegio de Postgraduados, Mexico, Distinguished Visiting Professor at San Jose State University, and Professor of Education, University of Arkansas, Pine Bluff, and Director of the Arkansas Evaluation Center. For the past decade, he was Director of the MA Policy Analysis and Evaluation Program in the School of Education and a Consulting Professor of Education. He is the past president of the American Evaluation Association and the American Anthropological Association's Council on Anthropology and Education. He received both the Paul Lazarsfeld Award for Outstanding Contributions to Evaluation Theory and the Myrdal Award for Cumulative Contributions to Evaluation Practice—the American Evaluation Association's highest honors. He has conducted evaluation projects throughout the world, including Australia, Brazil, Finland, Japan, Mexico, Nepal, New Zealand, South Africa, Spain, the United Kingdom, and the United States. He has contributed to a variety of encyclopedias, including the International Encyclopedia of Education, the Encyclopedia of Human Intelligence, the Encyclopedia of Evaluation, and the Encyclopedia of Social Science Research Methods. He is also the author or editor of 10 books, including Empowerment Evaluation Principles in Practice, Ethnography: Step by Step (2nd ed.), and Excellence and Equality: A Qualitatively Different Perspective on Gifted and Talented Education. He received his PhD from Stanford University.

    Floyd J. Fowler Jr. has been a senior research fellow at the Center for Survey Research at University of Massachusetts Boston since 1971. He was Director of the Center for 14 years. He is the author (or coauthor) of four textbooks on survey methods, as well as numerous research papers and monographs. His recent work has focused on studies of question design and evaluation techniques and applying survey methods to studies of medical care. He received a PhD from the University of Michigan in 1966.

    Chase H. Harrison is Preceptor in Survey Research in the Department of Government, Faculty of Arts and Sciences, Harvard University. He has focused his career on implementing survey research protocols in an academic setting. He was the founding methodologist of the Center for Survey Research and Analysis at the University of Connecticut and previously worked at the Roper Center for Public Opinion Research and at Market Strategies, Inc., in Southfield, Michigan. He received his PhD in political science and MA in political science with a concentration in survey research from the University of Connecticut.

    Michael I. Harrison is an internationally known scholar of organizations and health systems. He is a senior research scientist at the Agency for Healthcare Research and Quality in Rockville, Maryland, where he leads work on delivery system change and process redesign. He holds a doctorate in sociology from the University of Michigan. He has been a faculty member at the State University of New York at Stony Brook and Bar-Ilan University in Israel, a visiting professor at the School of Management at Boston College, and a visiting scholar at Brandeis University, Georgetown University, Harvard Business School, and the Nordic School of Public Health. He has worked as a consultant and conducted research in businesses, services, government organizations, worker-managed cooperatives, and voluntary groups. His publications include Diagnosing Organizations: Methods, Models, and Processes (Sage, 2005, 3rd ed.), Organizational Diagnosis and Assessment: Bridging Theory and Practice (with A. Shirom; Sage, 1999), and Implementing Change in Health Systems: Market Reforms in the United Kingdom, Sweden, and the Netherlands (Sage, 2004). His current research deals with system transformation and with unintended consequences of implementing health information technology.

    Gary T. Henry holds the Duncan MacRae '09 and Rebecca Kyle MacRae Professorship of Public Policy in the Department of Public Policy and directs the Carolina Institute for Public Policy at the University of North Carolina (UNC) at Chapel Hill. He also holds an appointment as Senior Statistician in the Frank Porter Graham Institute for Child Development at UNC-Chapel Hill. He has evaluated a variety of policies and programs, including North Carolina's Disadvantaged Student Supplemental Fund, Georgia's Universal Pre-K, public information campaigns, and the HOPE Scholarship, as well as school reforms and accountability systems. The author of Practical Sampling (Sage, 1990) and Graphing Data (Sage, 1995), and coauthor of Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs (2000), he has published extensively in the fields of evaluation and education policy. He received the Outstanding Evaluation of the Year Award from the American Evaluation Association in 1998 and the Joseph S. Wholey Distinguished Scholarship Award in 2001 from the American Society for Public Administration and the Center for Accountability and Performance.

    Sean M. Hurley is Research Assistant Professor at the University of Vermont's James M. Jeffords Institute. His interests include field research methodology, multilevel modeling, and missing data augmentation. His recent work has been primarily in the context of early childhood education. He received a doctorate degree in cognitive psychology from Vanderbilt University in 2003, and he recently completed an Institute of Education Sciences postdoctoral fellowship, also at Vanderbilt, focused on applying rigorous experimental methods to field research.

    Mary Kane is President and CEO of Concept Systems, Inc., an organization that partners with federal, state, and local social service and public health interests as well as academic institutions and businesses. She has developed customized process and group consulting for federal, state, and county agencies, health and mental health organizations, private corporations, not-for-profit agencies, and school districts and has facilitated with groups ranging from small boards of directors to organizations represented by thousands of stakeholders. With William Trochim, she is the coauthor of the methodology book for social researchers, Concept Mapping for Planning and Evaluation (Sage, 2007). She cofounded Concept Systems, Inc., in 1993 after a successful career in the management and growth of community-based cultural and learning organizations. Her current methodology and service interests include supporting grant-funded centers in start-up management skills for researchers and the linkage of planning, action, and evaluation in public sector organizations.

    Allison Karpyn is Director of Research and Evaluation at The Food Trust, a Philadelphia-based nonprofit organization committed to providing access to affordable nutritious foods. In addition, she teaches program planning and evaluation as well as community assessment courses in the MPH program at Drexel University. She is a member of the American Public Health Association, the Society for Public Health Education, and the American Evaluation Association and is certified as a professional researcher by the Marketing Research Association. She earned her bachelor's degree in public health at The Johns Hopkins University and her master's and doctorate degrees in policy research, evaluation, and measurement at the University of Pennsylvania.

    Paul J. Lavrakas is a research psychologist currently serving as a methodological research consultant for several public and private sector organizations. He served as vice president and chief methodologist for Nielsen Media Research from 2000 to 2007. Previously, he was a professor of journalism and communication studies at Northwestern University (1978–1996) and at Ohio State University (OSU; 1996–2000). During his academic career, he was the founding faculty director of the Northwestern University Survey Lab (1982–1996) and the OSU Center for Survey Research (1996–2000). Among his publications, he has written a widely read book on telephone survey methodology and served as the lead editor for three books on election polling, the news media, and democracy, as well as coauthoring four editions of The Voter's Guide to Election Polls. He served as a guest editor for a special issue of Public Opinion Quarterly on “Cell Phone Numbers and Telephone Surveys” (2007, Vol. 71, No. 5), and is also the editor of the Encyclopedia of Survey Research Methods that Sage will publish in 2008. He was a corecipient of the 2003 AAPOR Innovators Award for his work on the standardization of survey response rate calculations.

    James J. Lindsay has worked as a program evaluator, specializing in developing and implementing evaluations of publicly funded programs. As a social psychologist trained in basic research, he has an excellent grasp of research methodology and statistics and has published papers on multiple topics, including human aggression and behavior related to the natural environment. As Project Coordinator for the University of Minnesota Volunteerism Project at the Institute, he is responsible for the analysis of the data and reporting of results. He earned a PhD in 1999 from the University of Missouri.

    Mark W. Lipsey is Director of the Center for Evaluation Research and Methodology and Senior Research Associate at the Vanderbilt Institute for Public Policy Studies. His professional interests are in the areas of program evaluation research, social intervention, field research methodology, and research synthesis (meta-analysis). The topics of his recent research have been risk and intervention for juvenile delinquency and substance use, early childhood education programs, and issues of methodological quality in program evaluation research. He is a recipient of awards from the American Evaluation Association, the Society of Prevention Research, and the Campbell Collaboration, a Fellow of the American Psychological Society, and coauthor of the program evaluation textbook, Evaluation: A Systematic Approach, and the meta-analysis primer, Practical Meta-Analysis.

    Julia Littell, PhD, is a professor at the Graduate School of Social Work and Social Research, at Bryn Mawr College, where she has taught since 1994. She was Research Director for the National Family Resource Coalition, a Senior Research Fellow at the Chapin Hall Center for Children, and a lecturer at the School of Social Service Administration at the University of Chicago. She is coauthor of Systematic Reviews and Meta-Analysis, Putting Families First: An Experiment in Family Preservation, and numerous articles and chapters on research and evaluation methods, research synthesis, and child welfare services. She is a member of the editorial boards of Children and Youth Services Review and the Journal on Social Work Education. She has served as adviser on research and evaluation projects for community-based and governmental agencies at all levels and for independent foundations. She currently serves as Editor and Cochair of the International Campbell Collaboration (C2) Social Welfare Coordinating Group and is a member of the C2 Steering Group. She is a 2006 recipient of the Pro Humanitate Literary Award presented by the Center for Child Welfare Policy of the North American Resource Center for Child Welfare to authors “who exemplify the intellectual integrity and moral courage required to transcend political and social barriers to champion ‘best practice’ in the field of child welfare.” She earned her undergraduate degree from the University of Washington and her MSW and PhD from the University of Chicago.

    Thomas W. Mangione is senior research scientist at John Snow, Inc., in Boston, Massachusetts, and is Director of its Survey Research Facility. During his graduate training he worked at the University of Michigan's Survey Research Center, one of the world's premier survey research facilities. He has had more than 35 years of survey research experience using in-person, telephone, and self-administered data collection modes. He has published several articles and two books on survey research methodology. He also has been teaching survey research methodology at both the Boston University and Harvard University schools of public health since the mid-1970s. He obtained his PhD in organizational psychology from the University of Michigan in 1973.

    Melvin M. Mark is Professor and Head of Psychology at the Pennsylvania State University. A past president of the American Evaluation Association, he has also served as editor of the American Journal of Evaluation, where he is now Editor Emeritus. His interests include the theory, methodology, practice, and profession of program and policy evaluation. He has been involved in evaluations in a number of areas, including prevention programs, federal personnel policies, and various educational interventions, including STEM program evaluation. Among his books are Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs (with Gary Henry and George Julnes) and the recent SAGE Handbook of Evaluation (with Ian Shaw and Jennifer Greene), as well as two new books, Evaluation in Action: Interviews With Expert Evaluators (with Jody Fitzpatrick and Tina Christie) and What Counts as Credible Evidence in Applied Research and Contemporary Evaluation (with Stewart Donaldson and Tina Christie, Sage), and the forthcoming Social Psychology and Evaluation (with Stewart Donaldson and Bernadette Campbell).

    Joseph A. Maxwell is Professor in the College of Education and Human Development at George Mason University, where he teaches courses on research design and methods. He is the author of Qualitative Research Design: An Interactive Approach (2005, Sage), as well as papers on qualitative methodology, mixed methods research, sociocultural theory, and medical education. He has also worked extensively in applied settings. He has given seminars and workshops on teaching qualitative research methods and on using qualitative methods in various applied fields, and has been an invited speaker at conferences and universities in the United States, Puerto Rico, Europe, and China. He has a PhD in anthropology from the University of Chicago.

    Erika A. Patall is a PhD candidate in social psychology in the Department of Psychology and Neuroscience, Duke University, Durham, North Carolina. She is currently a fellow in the Program for Advanced Research in the Social Sciences. Her research interests include the role of choice in the development of interest, motivation, and academic achievement and how the activities of children outside school influence their academic achievement, including how parents' involvement in homework may affect academic achievement. She is also interested in the development and use of meta-analytic methods in social science research.

    Charles S. Reichardt is a professor of psychology at the University of Denver, where he has been since 1978 and where he most likely will remain until he retires. His writing concerns research methods, statistics, and program evaluation, most often with a focus on the logic of assessing cause and effect. He has published three volumes (two of which concern the interplay between qualitative and quantitative methods), all coedited with Tom Cook, Sharon Rallis, or Will Shadish. He is a methodological consultant on a variety of program evaluations and gives workshops on statistics and research design. He has served on the board of directors of the American Evaluation Association, is a fellow of the American Psychological Association and an elected member of the Society for Multivariate Experimental Psychology, and has received the Perloff Award from the American Evaluation Association and the Tanaka Award from the Society for Multivariate Experimental Psychology.

    Dennis W. Rook is a professor of clinical marketing at the Marshall School of Business. He received his PhD in marketing in 1983 from Northwestern University's Kellogg Graduate School of Management, where he concentrated in consumer behavior theory and qualitative research methods. Following his PhD, he served on the marketing faculty of the University of Southern California (USC) in Los Angeles. He left the academic environment in 1987 to join the strategic planning department of DDB Needham Worldwide in Chicago, where he was a research supervisor. Following this, he was appointed director of Qualitative Research Services at Conway/Milliken & Associates, a Chicago research and consulting company. He rejoined the USC marketing faculty in 1991. His published research has investigated consumer impulse buying, “solo” consumption behavior, and consumers' buying rituals and fantasies. These and other studies have appeared in the Journal of Consumer Research, Advances in Consumer Research, Symbolic Consumer Behavior, and Research in Consumer Behavior. He has served as treasurer of the Association for Consumer Research, for which he is also a member of the Advisory Council. In 1985, his dissertation research received an award from the Association for Consumer Research, and in 1988, he was appointed to the editorial board of the Journal of Consumer Research. He has served as a research and marketing consultant for companies in the consumer packaged goods, financial services, communications, and entertainment industries.

    Prem N. Shamdasani is Associate Professor of Marketing; Vice Dean, Executive Education; Academic Director, Asia-Pacific Executive (APEX) MBA Program; Codirector, Stanford-NUS International Management Program at the NUS Business School, National University of Singapore. His research and teaching interests include brand management, new product marketing, retail strategy, relationship marketing, and cross-cultural consumer behavior. He has taught in the United States and internationally and has received numerous commendations and awards for teaching excellence. Apart from teaching graduate and executive MBA courses, he is very active in executive development and training and consulting for numerous national and international corporate and governmental clients such as Caterpillar, Microsoft, DuPont, IBM, UPS, Siemens, Daimler, Alcatel-Lucent, L'Oreal, Danone, Philips, Roche, Singapore Airlines, Singapore Tourism Board, USDA, Nokia and Samsung. He has coauthored three books, including Focus Groups: Theory and Practice, for Sage. He is also actively involved in focus group research for consumer products companies and social marketing programs. He holds a BBA degree with first class honors from the National University of Singapore and received his PhD from the University of Southern California, Los Angeles.

    Joan E. Sieber, a psychologist and Professor Emerita at California State University, East Bay, has specialized in empirical research on questions of scientific ethics, culturally sensitive methods of research and intervention, data-sharing methodology, and scholarship on ethical problem solving. From 2001 to 2002, she was Acting Director of the National Science Foundation program Societal Dimensions of Engineering, Science and Technology. She is currently the Editor-in-Chief of the Journal of Empirical Research on Human Research Ethics (JERHRE), an international journal published by University of California Press in print and online, and is a research associate at the Center for Public Policy, University of Houston. She is the author of eight books and numerous other publications, including software and encyclopedia entries on ethical problem solving in social and behavioral research. She has served on seven institutional review boards (IRBs), of which she has chaired four, and has assisted many IRBs, including those in federal agencies (the Bureau of Justice Statistics and the Bureau of Prisons), those in private corporations (Interval Research Corporation, the University Corporation for Atmospheric Research), and various academic institutions in the development of their policies and procedures. She has served on the Accreditation Council of the Association for the Accreditation of Human Research Protection Programs (AAHRPP).

    David W. Stewart is Dean of the A. Gary Anderson Graduate School of Management at the University of California, Riverside. He is a past editor of the Journal of Marketing and is the current editor of the Journal of the Academy of Marketing Science. He has authored or coauthored more than 200 publications and 7 books. He received his PhD and MA in psychology from Baylor University and his BA in psychology from the University of Louisiana at Monroe.

    Abbas Tashakkori is Professor of Research and Evaluation Methodology and Associate Dean for Research and Graduate Studies in the College of Education of Florida International University. He has published extensively in national and international journals and has coauthored or coedited three books. He has a rich history of research, program evaluation, and writing on minority and gender issues, utilization of integrated methods of research, and teacher efficacy and job satisfaction. He is a founding coeditor of the Journal of Mixed Methods Research. His latest work in press is a book with Charles Teddlie titled Foundations of Mixed Methods Research: Integrating Quantitative and Qualitative Techniques in the Social and Behavioral Sciences (Sage, expected 2009).

    Charles Teddlie is the Jo Ellen Levy Yates Professor (Emeritus) in the College of Education at Louisiana State University. He is the author of 12 books and numerous chapters and articles on research methods and school/teacher effectiveness. These include The Foundations of Mixed Methods Research: Integrating Quantitative and Qualitative Techniques in the Social and Behavioral Sciences (with Abbas Tashakkori, 2009), The Handbook of School Effectiveness Research (with David Reynolds, 2000), and Schools Make a Difference: Lessons Learned from a Ten-Year Study of School Effects (with Sam Stringfield, 1993).

    William M. Trochim is Professor of Policy Analysis and Management at Cornell University and is the Director of Evaluation for the Weill Cornell Clinical and Translational Science Center, Director of Evaluation for Extension and Outreach, and Director of the Cornell Office for Research on Evaluation. He is currently President of the American Evaluation Association. His research is broadly in the area of applied social research methodology, with an emphasis on program planning and evaluation methods. In his career, he developed quasi-experimental alternatives to randomized experimental designs, including the regression discontinuity and regression point displacement designs. He created a structured conceptual modeling approach that integrates participatory group process with multivariate statistical methods to generate conceptual maps and models useful for theory development, planning, and evaluation. He has been conducting research with the National Institutes of Health and the National Science Foundation on the use of systems theory and methods in evaluation. He has published widely in the areas of applied research methods and evaluation and is well-known for his textbook, The Research Methods Knowledge Base, and for his Web site on social research methods. He received his PhD from the Department of Psychology at Northwestern University in methodology and evaluation research.

    Herbert M. Turner III is President and Principal Scientist of ANALYTICA, a small for-profit company that specializes in the application of rigorous research methods, including randomized field trials and systematic reviews with meta-analysis. ANALYTICA is a founding partner in the Institute of Education Sciences (IES) Regional Educational Laboratory for the Mid-Atlantic Region of the United States, where his company provides technical and managerial oversight of two large-scale cluster randomized field trials on the Odyssey Math and Connected Math 2 curricula. ANALYTICA is also leading the development of the What Works Clearinghouse's randomized controlled trial registry of educational interventions. While leading ANALYTICA, he is an adjunct assistant professor at the University of Pennsylvania's Graduate School of Education, where he teaches statistical programming, quantitative research methods, and an advanced seminar on randomized controlled trials with Robert F. Boruch. He also serves as an Advisory Group member of the Campbell Collaboration Education Coordinating Group and is a coauthor of a Campbell Collaboration systematic review that examined the effect of parent involvement on elementary school children's academic achievement.

    Janet H. Van Ness, MSPH, is a community health educator with extensive experience providing direct service, technical assistance, health education materials development, and evaluation services in community settings. She has been at John Snow, Inc., for 14 years, and her work has focused on developing new approaches to tobacco treatment services. In particular, she has worked on improved approaches to disseminating prevention and treatment information, such as creating culturally appropriate print materials and delivering individualized prevention and treatment messages through Web-based applications. In many of these endeavors, evaluation studies have played an important role in demonstrating the effectiveness of these new approaches.

    David Weisburd is Walter E. Meyer Professor of Law and Criminal Justice and Director of the Institute of Criminology at the Hebrew University, Israel, and Distinguished Professor of Administration of Justice at George Mason University, Virginia. He is an elected Fellow of the American Society of Criminology and of the Academy of Experimental Criminology. He is also Cochair of the steering committee of the Campbell Crime and Justice Group, and a member of the National Research Council Committee on Crime, Law and Justice. He is author or editor of 14 books and more than 70 scientific articles that cover a wide range of criminal justice research topics, including crime at place, violent crime, white collar crime, policing, illicit markets, criminal justice statistics, and social deviance. He is editor of the Journal of Experimental Criminology.

    Robert K. Yin is President and CEO of COSMOS Corporation, an applied research and social science firm operating since 1980. At COSMOS, he leads various research projects using qualitative-quantitative (mixed methods) research. He has authored more than 100 books and peer-reviewed articles. The fourth edition of his well-received book Case Study Research: Design and Methods was recently completed, and earlier editions have been translated into six languages. He also has authored Applications of Case Study Research (2003) and edited two readers, The Case Study Anthology (2004) and The World of Education (2005). In 1998, he founded the “Robert K. Yin Fund” at MIT, which supports seminars on brain sciences as well as other activities related to the advancement of predoctoral students. He has a BA from Harvard College (magna cum laude) and a PhD from MIT (brain and cognitive sciences).

