Quantitative Research for the Qualitative Researcher

Laura M. O'Dwyer & James A. Bernauer

    Preface

    Thank you for choosing to read Quantitative Research for the Qualitative Researcher! Our intention is to provide an introduction to quantitative research methods in the social sciences and education, especially for those who have been trained in, or are currently learning, qualitative methods. This book might also be a useful supplement for courses in quantitative methods at the upper undergraduate or graduate levels. There are two important features of this book worth noting. The “Guidelines for Evaluating Research Reports” found in the Appendix can be used to extend understanding of any quantitative article by providing an organized way to examine essential features that are described in this book. In addition, we reference several quantitative articles on a companion website in appropriate sections of the book, allowing readers to make immediate connections to the ideas being discussed. Using these articles in conjunction with the Guidelines and relevant sections of the text can provide an even deeper understanding of important ideas and concepts.

    Purpose of This Book

    Why should qualitative researchers want to learn more about the quantitative tradition? This question has been foremost in our minds as we developed the original idea for this book and as we formulated each chapter. We came to the realization early on that we view research in the social sciences as an exciting quest for discovery using disciplined inquiry, regardless of whether this inquiry is located in the qualitative or quantitative tradition. Our primary goal therefore in writing this book is to promote understanding and appreciation of the quantitative tradition in the social sciences especially for those who are most familiar with the qualitative tradition. By expanding their knowledge, skills, and appreciation of the quantitative tradition, we hope that our readers will acquire an enhanced repertoire of tools for reading, evaluating, and conducting research. In addition, we hope that our readers will develop an appetite for collaborating with colleagues who pose interesting research questions that can be addressed across traditions.

    While an increasing number of books present a balanced approach to quantitative, qualitative, and mixed methods (e.g., Gay, Mills, & Airasian, 2009; Johnson & Christensen, 2000; Springer, 2010), we think that our contribution is a digestible book that concisely conveys the fundamental concepts and skills underlying quantitative methods by identifying the commonalities that exist between the quantitative and qualitative traditions. These concepts, skills, and commonalities can then be used as a springboard for further learning in both traditions.

    While this book is intended primarily for practicing or aspiring researchers who are predisposed to use qualitative methods, it is intended neither to convert qualitative researchers to the quantitative tradition nor to make “mixed methods” researchers out of them. Rather, its central aim is to promote understanding and appreciation of the two traditions, on the grounds that the complexity inherent in both people and phenomena calls for such understanding and appreciation. We undertook this book in complete agreement with the sentiment attributed to Albert Einstein: “Many of the things you can count, don't count. Many of the things you can't count really count.”

    This book grew out of our experience in teaching research design, analysis, and statistics to undergraduate and graduate students in the fields of social science and education. Whereas James teaches qualitative research methods at Robert Morris University in Pittsburgh, Laura teaches quantitative methods at Boston College. Many of our doctoral students gravitate toward qualitative dissertations, and while this may reflect their true aspirations and predispositions, we think that this choice may sometimes be based on a sense of foreboding toward anything connected to that dreaded 10-letter word—statistics!

    While this book is not intended to convince individuals to switch to a quantitative mind-set, it is intended to demonstrate that all research traditions (quantitative, qualitative, mixed) share the common goal of trying to discover new knowledge by using a systematic approach. By providing a clear description of concepts underlying quantitative methodology, we hope to promote openness to this tradition and the recognition that the appropriateness of using a particular method depends on the questions asked and not a “posture” that has come to characterize practitioners in each tradition (Guba, 1981). While some of us tend to ask questions that can best be answered by “crunching numbers,” others ask questions that require “crunching words.” However, no matter what kind of crunching one may do, it is first necessary to have quality data that have been collected in a systematic and reflective manner. To put it another way, we hope that readers come to recognize that qualitative and quantitative approaches are not intrinsically antagonistic; in fact, we hope that by the time you have finished this book you will understand why the two traditions are complementary.

    We also want to demonstrate that good quantitative research is not primarily about statistics but rather about problem quality, design quality, evidence quality, and procedural quality. These criteria apply equally to qualitative research. Unfortunately, we have found that while most faculty consider the “paradigm wars” a thing of the past, mistrust or at least misunderstanding still exists in the ranks. Thus, another goal of this book is to promote a greater willingness to integrate quantitative and qualitative approaches in teaching, learning, and research.

    The Authors

    A little bit about ourselves: James's background is rather eclectic; he taught in secondary education and special education and, for a time, worked in banking and non-profit administration. Though initially trained in quantitative methods, James transitioned to the qualitative tradition several years ago and now considers himself primarily a qualitative researcher. At Robert Morris University, James teaches research methodology and educational psychology, and his research interests include both K–12 education and higher education. Laura has spent most of her career in the academy. Although originally trained in applied physics, electronics, and geophysics, she made the switch to educational research more than 15 years ago. At Boston College, Laura teaches quantitative research methods and statistics, and her research focuses on the impact of school-based interventions on student and teacher outcomes, often technology based. We think that this strange admixture of backgrounds has helped us produce a book that will engage readers of a qualitative bent who are nonetheless open to making an overture to the “other side.” While this openness may be driven by a variety of motives, such as to fulfill a university requirement, supplement a qualitative course, increase self-efficacy, or simply satisfy curiosity, we have tried to write a book that readers will come to consider well worth their investment in time and treasure.

    Intended Audience

    We would now like to be a little more explicit in regard to the content and intended audience for this book. As described earlier, our aim is to promote an appreciation and increased understanding of the fundamental structure and aims of quantitative methods primarily for readers who may have little or dated background in this area. Although “appreciation” is attitudinal and “understanding” is cognitive, we think that these two desired outcomes are intrinsically connected. If one learns a bit more about quantitative methods but still considers its practice akin to voodoo, then what have we gained? Rather, we hope that qualitative readers leave this sojourn feeling even better about their own preferred methodological leanings, with an appreciation that some problems of interest are more amenable to quantitative methods and that quantitative methods can complement qualitative perspectives in ways that they may not have envisioned previously.

    Like most authors, we would like to think that people the world over will find this book so intriguing that they may take it to remote idyllic beaches to find pearls of wisdom; however, we have reluctantly accepted the fact that perhaps this vision may be a bit grand. We do think, however, that graduate students and upper-level undergraduate students will find that this book reinforces and perhaps expands what they are learning while offering additional insights. In addition, although this book could be used as a stand-alone text for an introductory course in quantitative research methods, it would probably be more useful as a complementary text to help build a bridge between the qualitative and quantitative traditions.

    Organization of This Book

    This text comprises 11 chapters organized into four sections. We have interspersed definitions of key terms and concepts in all chapters and include a glossary at the end of each chapter. Starting in Chapter 3, we include a section at the end of each chapter that refers to published articles as examples of how quantitative research is conducted, described, and interpreted. In total, six published articles are described, two or three at the end of each chapter. We hope that these real-world research examples will help readers break down the components of quantitative research and galvanize their understanding of the concepts and methods covered in this text. An overview of the six articles is provided in Table P.1, and the complete published versions are available at http://www.sagepub.com/odwyer. Also, to complement the text, we provide an appendix that contains guidelines for evaluating research reports (journal articles, dissertations, etc.). Although these guidelines are “slanted” toward quantitative research, we show parallels with qualitative research that are consistent with the theme of this book. At the end of each chapter, we have included discussion questions that have been designed not so much to arrive at precise answers but rather to promote creative thinking about the linkages between quantitative and qualitative research traditions.

    Table P.1 Summary of the Research Articles Referred to in This Text

    Section I, titled “Research in the Social Sciences: Qualitative Meets Quantitative,” comprises Chapters 1 to 3. In these chapters, we provide an advance organizer in the form of a description of research in general—including its aims and methods. Our intent here is to first provide information about what constitutes “quality research.” Because it is our contention that there is a fundamental unity underlying all research, we next discuss the unifying concepts of research that apply regardless of whether one is examining problems through a quantitative or qualitative lens. We also try to explicate the difference between “problem finding” and “problem solving” and how these differences also help traverse the qualitative–quantitative continuum. However, we also give some background about the “paradigm wars,” highlighting that while there is a fundamental unity between the traditions, differences remain, some of which are significant. We conclude Section I by explicating the purposes, philosophical assumptions, methods, and further conceptions of the “quality” of qualitative research, followed by a similar treatment of quantitative research. The sequence of qualitative followed by quantitative discussion is not accidental; rather, it is based on the assumption that most of our readers are qualitatively inclined. Our sequencing of the content is designed to help those readers transition from thinking qualitatively to thinking quantitatively. In Chapter 3, we begin to include sections called “Connections to Qualitative Research.” These sections are designed to help our readers gain a better understanding of the quantitative tradition by pointing out salient concepts, terminology, and perspectives that “connect” to the qualitative tradition.

    In Section II, titled “The Sine Qua Non for Conducting Research in the Quantitative Tradition,” we engage readers with the essentials of conducting research in the quantitative tradition. In Chapters 4 through 6, we provide an overview of sampling and external validity, instrumentation and measurement, and internal validity, respectively. Our coverage of these topics is purposefully presented prior to our coverage of the most common quantitative research designs and data analysis procedures. We organized the text this way because we believe that our readers will be able to develop a more complete understanding of quantitative research designs if they understand the common principles that underlie all quantitative research. By continuing to make connections in Section II between the terms used in both traditions, we hope to prompt readers to recognize that they may already have a solid platform for understanding quantitative methods. The “Connections to Qualitative Research” sections found in each of these chapters were sometimes easy to formulate, but at other times they made us realize that, while the two traditions are complementary and in pursuit of the same common goal, there are indeed important differences. In these sections, we pull together and reaffirm the complementary nature of quantitative and qualitative research by stressing their shared empirical and systematic components, as well as by celebrating their differences. In the end, we hope that readers will come to agree with our conclusion that these differences provide the basis for an even more powerful methodology.

    Section III, titled “Research Design and Data Analysis in the Quantitative Tradition,” comprises Chapters 7 through 10. Chapters 7 and 8 introduce readers to the most common non-experimental and experimental research designs used in the social sciences, respectively. In each chapter, we describe the essential characteristics of the design, the steps undertaken during implementation, the strengths and limitations of the design, as well as common threats to external and internal validity. In addition, we point readers to the pertinent sections of Chapter 9 that describe associated data analysis procedures and continue to include our “Connections to Qualitative Research” sections. Chapters 9 and 10 round out Section III and focus on descriptive and inferential analyses as the basic data analysis procedures used to analyze the data generated by quantitative research designs. As readers will see, the methods used to analyze qualitative and quantitative data are quite different. However, despite the differences, analyses in both traditions seek to make sense of the collected data using tried-and-tested analysis approaches. We hope that these chapters will encourage readers to develop an appreciation for the fact that the unifying goal of qualitative and quantitative research is to discover new knowledge that can help describe, explain, and predict the world around us.

    Section IV, the final section of this text, includes only Chapter 11. The purpose of this chapter is to provide readers with some final advice as to how to use a multiple mind-set to appreciate problems from both quantitative and qualitative perspectives.

    In an appendix at the end of the text, we provide readers with “Guidelines for Evaluating Research Summaries.” The purpose of this appendix is to provide our readers with step-by-step guidelines for evaluating research summaries (e.g., journal articles, dissertations, etc.). These guidelines have been “field tested” for several years in graduate research classes and are consistent with the terms and concepts we introduce in the preceding chapters.

    What This Text Is Not!

    Now that we have told you what this book is about, we need to also tell you what it is not about. It is not a book that seeks to provide a comprehensive treatment of either quantitative or qualitative methods; rather, it is meant to provide a unifying framework for understanding and appreciating both the underlying commonalities as well as the differences. As such, we refer readers to more specialized research design, measurement, and data analysis texts for additional coverage of some topics. For coverage of research methods in general, we recommend texts such as Creswell (2008), Mertler and Charles (2010), Gay, Mills, and Airasian (2009), McMillan and Schumacher (2006), Gall, Borg, and Gall (2003), and Fraenkel and Wallen (2011). For additional information about specialized research designs (e.g., survey research designs, true and quasi-experimental designs), we recommend texts such as Dillman, Smyth, and Christian (2008), Rea and Parker (2005), Fowler (2008), Groves, Fowler, Couper, and Lepkowski (2009), Fink (2012), Wright and Marsden (2010), Shadish, Cook, and Campbell (2002), Kirk (2012), and Keppel and Wickens (2004). For additional coverage of measurement issues, we recommend DeVellis (2011) and Netemeyer, Bearden, and Sharma (2003), and for additional coverage of statistics and data analysis, we recommend texts such as Privitera (2012), Howell (2010), Glass and Hopkins (1996), and Shavelson (1996).

    Because we are in agreement with a constructivist approach to learning, we do not see this text as the final destination but rather as a steppingstone toward developing a more advanced and nuanced understanding of quantitative research methods. Our ultimate hope is that, as a consequence of this book, readers develop a greater appreciation of both traditions and that this appreciation will spur increased understanding as well as collaboration.

    Appendix: Guidelines for Evaluating Research Summaries

    Staying current with the literature in your field, being able to critically evaluate this literature, and possessing the skills to plan your own research are essential components of being a professional. Consequently, the purpose of this appendix is to provide guidelines to help readers independently evaluate research reports (journal articles, dissertations, etc.). Although these guidelines are “slanted” toward quantitative research, we show parallels with qualitative research, which is consistent with the theme of this book. These guidelines also include embedded descriptions of the sections that are linked to the content covered in Chapters 1 to 10 of the text. These guidelines have been “field tested” for several years in graduate research classes and are consistent with the 10 questions posed in Chapter 1 regarding quality research.

    Although the Guidelines for Evaluating Research Summaries were designed as an aid for evaluating any research article in the social sciences, they use the formal headings often used in dissertations. Typically, these sections are as follows:

    Abstract

    • Introduction
    • Review of Related Literature
    • Methods
    • Results
    • Conclusions and Implications

    References

    Although many published articles do not explicitly use these headings, they are very likely to contain all of the information that is presented under these headings, even if in abbreviated form.

    These guidelines are offered as a way to evaluate any research report that summarizes the findings from an empirical study (journal articles, dissertations, etc.). In preparation for reviewing or using these guidelines, it may also be helpful to refer to the quality question criteria in Chapter 1. As a first step in evaluating a research report, we suggest that you read the article in its entirety. This will provide you with an overview of what the study is about, how it was conducted, and what was discovered. Subsequently, these evaluation guidelines can be used to evaluate the research summary. You may need to jump back and forth from the article to the guidelines in order to “fill in the blanks”!

    The guidelines include numbered questions to help you evaluate any type of research report. Follow these guidelines and answer each question, since doing so will help to focus your work. In addition, the guidelines present a model or ideal format for published articles; however, because there is variation in how research is reported both within and between traditions, not all research summaries will conform exactly to these guidelines. Nevertheless, these guidelines have been shown to offer a reasonable way to evaluate published research. Readers should feel free to modify these guidelines as circumstances and context suggest! The following is a nine-step guide for reviewing and evaluating the sections of a research summary.

    Step 1: Identify the Research Summary (Journal Article, Dissertation, Etc.)

    Step 2: Review, Evaluate, and Summarize the Abstract

    The Abstract should introduce readers to the research purpose, the problem being addressed, and the methods used, and it should summarize the findings and conclusions.

    Evaluation Questions:

    • Does the Abstract describe the purpose, problem, methods, results, and conclusions? Yes or No? Describe.
    • Does the Abstract encourage you to read the rest of the article? Yes or No? Describe.
    Step 3: Review, Evaluate, and Summarize the Introduction Section (I)

    The Introduction serves as the next deeper level of information for readers. Although the terminology we introduced in Questions 1 to 5 (see Chapter 1) is standard in social science research, the authors of your research report may not use these same terms. In this case, you must do some detective work by reading more closely to find out if they have answered these questions. There is even the possibility that the authors did not answer some of these questions. Specifically, the Introduction section should include the following:

    • Purpose of the Study: Should state why the researchers conducted the study.
    • Problem Statement, Research Questions/Hypotheses: The problem that was addressed in the study should be stated, as should research questions and hypotheses. Researchers will either develop a testable guess as to what might happen (hypothesis) or specific questions to be answered (research questions). Whereas hypotheses are usually used with experimental or retrospective designs, research questions are often used with survey designs and most qualitative designs (although questions may “emerge” as the study progresses with qualitative designs).
    • Significance of the Problem: The significance of the study should be described, and it should be clear why solving the problem is important. Potential impacts on practice or policy may be described.
    • Background: The Introduction should provide a “mini-review of the literature” to provide some context for the study. An effective Introduction will demonstrate that the researchers “have done their homework” by reviewing what others have already found, and consequently, they identified a “niche” for their own study.

    Evaluation Questions:

    • Is a purpose given that describes why the researchers conducted the study? Yes or No? Describe.
    • Is a problem statement stated? Yes or No? Describe.
    • Is the significance of the problem stated? Yes or No? Describe.
    • Is the background of the problem described using key references? Yes or No? Describe.
    • Are the research hypotheses or research questions stated? Yes or No? Describe.
    Step 4: Review, Evaluate, and Summarize the Related Literature Section (II)

    The Review of Related Literature is an extension of the Background discussion in the Introduction section. Notice the adjective “related”—this means that only those sources that shed light on the current study should be selected for review. While the Review of Literature section is typically the longest section of a thesis or dissertation, it is sometimes quite short in journal articles due to space limitations. In fact, it may not be labeled as the Review of Literature section but might simply be subsumed under the Background section. The following characteristics are present in an effective Related Literature section:

    • The purpose of the Review of Related Literature is to illuminate the study; therefore, the sources cited need to be related in more than a cursory way to the topic and purpose.
    • The Review of Related Literature should not only relate to the study in a general way; some of its sources should also show, in a specific way, that the problem statement, research questions, or hypotheses are important and reasonable in light of what has already been discovered by others.
    • The Review of Related Literature should not read as a string of “He said” or “She said” summaries; rather, it should flow, with sentences and thoughts connected by the researcher so that readers can learn from it. That is the beauty of reading a good study: you learn both from the researcher whose work you are currently reading and from the researchers whose work he or she describes in the Review of Related Literature.
    • Note that it may be difficult to assess the quality of the Review, especially for qualitative studies, where it is not always given the same prominence as in quantitative studies. Also, there should be a one-to-one correspondence between the sources reviewed in the Review of Related Literature and the References cited at the end of the study. As you come across sources in the Review, a good practice is to check that each is listed accurately in the References and that there are no sources listed in the References that are not cited in the text of the article.

    Evaluation Questions:

    • Do the reviewed sources relate to the topic and purpose of the study? Yes or No? Describe.
    • Do the sources support the problem statement, hypothesis/research questions? Yes or No? Describe.
    • Is the Review written as a “good story” that informs the reader? Yes or No? Describe.
    Step 5: Review, Evaluate, and Summarize the Methods Section (III)

    The Methods section should provide a description of the participants, the instruments used to collect data, the research design and data analysis, and the operational procedures.

    • Description of the Participants: The report should describe the participants (“subjects” may be used in experimental research) in terms of characteristics that are relevant to the study. Note that in behavioral and social science research, researchers usually collect data from people, although records and artifacts are also used. Relevant characteristics sometimes include age, ethnicity, gender, years of teaching experience, and so on. You will need to decide if the author provided adequate information based on the nature of the study.
    • Instrumentation: Data are the essence of research, and there must be some instrument used to collect data. Even in qualitative research where the “researcher is the primary instrument,” there is usually an observation or interview protocol that is used. If there is no mention of data collection instruments, then answer “no”! Regarding validity and reliability, the authors will often indicate what kind of validity and reliability measures were used to evaluate instruments, and in the case of quantitative studies, numerical indices are often given, whereas “trustworthiness” and “triangulation” are often used in qualitative studies. If there is no mention of validity or reliability, the article is deficient in this important area!
    • The Research Design: The name of the research design is often explicitly stated in the article, but sometimes it is not. If it is not identified, you will need to try to discern the design that has been used. Sometimes researchers might use elements of more than a single design, especially in mixed methods studies. The authors should provide a thorough description of what analyses were conducted to answer the research questions or test the hypotheses.
    • Operational Procedures: The ideal procedures section should be written with enough clarity and detail so that other researchers (like you!) could replicate the study. If you do not think that other competent researchers could replicate the study given the information provided, then the article is deficient in this area.

    Evaluation Questions:

    • Are the participants described in terms of important characteristics? Yes or No? Describe.
    • Is the method of sample selection identified (random, purposive, etc.)? Yes or No? NA? Describe.
    • Is the size of the sample reported? Yes or No? NA? Describe.
    • Are the data collection instruments identified (tests, observation protocols, etc.)? Yes or No? NA? Describe.
    • Is information about the validity and reliability of the instruments given? Yes or No? NA? Describe.
    • Is the research design identified (e.g., survey, case study, experimental, narrative)? Yes or No? Describe.
    • If the research design is not explicitly identified, what do you think it is, and why? Describe.
    • Are data analysis procedures adequately described? Do they allow the research questions to be answered or the research hypotheses to be tested? Yes or No? NA? Describe.
    • For quantitative research reports, were descriptive statistics (means, standard deviations, etc.) or inferential statistics (t-test, ANOVA, ANCOVA, etc.) discussed? Yes or No? NA? Describe.
    • For qualitative research reports, was textual analysis used to analyze data (segmenting, coding, themes, etc.)? Yes or No? NA? Describe.
    • Do the authors provide an adequate description of how the study was conducted? Yes or No? NA? Describe.
    Step 6: Review, Evaluate, and Summarize the Results Section (IV)

    The Results section should provide a comprehensive summary of the findings from the study. The results should be tied back to the research questions or hypotheses. In the case of quantitative research reports, the authors should describe the empirical results in detail, including the significance levels (p-values) and the practical significance (e.g., effect sizes) of the results. For a qualitative research report, the authors should describe any textual analyses used to analyze data (segmenting, coding, themes, etc.).
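    To make the statistics named above more concrete, the following sketch shows how an independent-samples t statistic and Cohen's d (a common effect size) are computed from two groups' scores. The data and group labels are entirely made up for illustration, not drawn from any study discussed in this book, and in practice the p-value would be obtained from statistical software or a t table rather than computed by hand.

    ```python
    from statistics import mean, stdev

    # Hypothetical scores for two illustrative groups (invented data)
    treatment = [78, 85, 90, 72, 88, 81, 79, 93]
    control = [70, 75, 82, 68, 80, 74, 71, 77]

    n1, n2 = len(treatment), len(control)
    m1, m2 = mean(treatment), mean(control)
    s1, s2 = stdev(treatment), stdev(control)  # sample standard deviations

    # Pooled standard deviation (assumes roughly equal group variances)
    sp = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

    # t statistic for an independent-samples t-test
    t = (m1 - m2) / (sp * (1 / n1 + 1 / n2) ** 0.5)

    # Cohen's d: the mean difference expressed in pooled-SD units
    d = (m1 - m2) / sp

    print(f"t = {t:.2f}, Cohen's d = {d:.2f}")
    ```

    A report that gives only the p-value tells the reader whether the difference is statistically significant; the effect size (here, Cohen's d) conveys its practical significance, which is why the guidelines ask about both.
    
    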

    Evaluation Questions:

    • If research hypotheses were stated, do the results support them? Yes or No? NA? Describe.
    • If research questions were stated, do the results answer them? Yes or No? NA? Describe.
    • In the case of quantitative research reports, were any descriptive statistics (means, standard deviations, etc.) reported? Yes or No? NA? Describe.
    • In the case of quantitative research reports, were any inferential statistics (t-test, ANOVA, ANCOVA, etc.) reported? Yes or No? NA? Describe.
    • In the case of quantitative research reports, were significance levels (p-values) reported? Yes or No? NA? Describe.
    • In the case of quantitative research reports, was practical significance (e.g., effect sizes) discussed? Yes or No? NA? Describe.
    • In the case of qualitative research reports, was textual analysis used to analyze data (segmenting, coding, themes, etc.)? Yes or No? NA? Describe.
    Step 7: Review, Evaluate, and Summarize the Conclusions and Implications Section (V)

    Section V is sometimes called the Discussion or Interpretation section. Whatever title is used, this section should “wrap up” the study by relating and interpreting what has been found in the Results section to themes such as those described in the bullets that follow. Remember: data, facts, and results do not speak entirely for themselves; they must be interpreted to “make meaning.” In a quantitative study, this is the first opportunity for the researcher to speak, because all the previous sections were probably written in the third person (“the researcher”). The Conclusions and Implications section gives quantitative researchers “voice,” that is, the right to say what they think their study actually means! In a qualitative study, this section continues the opportunity for researchers to “voice” their views on what the study means, since “interpretation” is often embedded with “analysis” given the inductive nature of qualitative research. The section should include the following:

    • Conclude/summarize based on the problem statement, research questions, or research hypotheses.
    • Interpret the meaning of the results, the limitations of the study, and any plausible alternative explanations.
    • Integrate the results with the background information from the Introduction and Related Literature sections.
    • Theorize about connections between Results and existing theory. The beginnings of new theories may also be discussed.
    • Discuss the implications or impact of the results on policy and/or practice.
    • Suggest further research that may replicate and/or refine the theory.
    • Discuss any unexpected findings.

    Evaluation Questions:

    • Does the researcher use one or more of these themes to summarize the study? Yes or No? NA? Describe.
    • Are the authors' conclusions warranted given the participants, the instruments used to collect the data, the research design and data analysis, and the operational procedures? Yes or No? NA? Describe.
    • Is the discussion of the study's limitations complete? Yes or No? NA? Describe.
    Step 8: Review and Evaluate the References Section

    Every source cited in the body of the research report (primarily in the Introduction and Related Literature sections) should be listed in proper format (e.g., American Psychological Association [APA] style) in the References section. Conversely, there should be no sources in the References section that are not cited in the research report. The references should be of high quality (i.e., from reputable, peer-reviewed journals) and recent (older references are acceptable only when the works cited are seminal pieces). Sources should be authoritative, and there should be no sources from Wikipedia-like Internet resources.

    Evaluation Questions:

    • Are the sources authoritative and relevant? Yes or No?
    • Is every source cited in text listed properly in the References? Yes or No?
    • Is every source listed in the References cited properly in the text? Yes or No?
    Step 9: Overall Evaluation of the Research Report

    Keeping all of the above criteria in mind, how would you rate this article?

    • Excellent: conforms to all or most of the criteria
    • Very Good: generally satisfies criteria but is somewhat lacking
    • Satisfactory: an acceptable article but could be improved
    • Unsatisfactory: significant shortcomings in terms of quality

    Provide a short summary statement that justifies your rating.

    References

    Allison, P. D. (2001). Missing data. Thousand Oaks, CA: Sage.
    Anderson, T., & Shattuck, J. (2013). Systematic review of design-based research progress: Is a little knowledge a dangerous thing? Educational Researcher, 42, 97–100. http://dx.doi.org/10.3102/0013189X12463781
    Bickel, R. (2007). Multilevel analysis for applied research: It's just regression. New York, NY: Guilford Press.
    Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: Book 1. Cognitive domain. New York, NY: Longmans Green.
    Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54, 297–312. http://dx.doi.org/10.1037/h0040950
    Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.
    Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks, CA: Sage.
    Clandinin, D. J., & Connelly, F. M. (2000). Narrative inquiry. San Francisco, CA: Jossey-Bass.
    Connolly, P. (2007). Quantitative data analysis in education: A critical introduction using SPSS. London, England: Routledge.
    Cook, T. D., & Campbell, D. T. (1976). The design and conduct of quasi-experiments and true experiments in field settings. In M. Dunnette (Ed.), Handbook of industrial and organizational psychology (pp. 228–293). Chicago, IL: Rand McNally.
    Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston, MA: Houghton Mifflin.
    Cook, T. D., & Shadish, W. R. (1994). Social experiments: Some developments over the past 15 years. Annual Review of Psychology, 45, 545–580. http://dx.doi.org/10.1146/annurev.ps.45.020194.002553
    Creswell, J. W. (1998). Qualitative enquiry and research design: Choosing among five traditions. London, England: Sage.
    Creswell, J. W. (2007). Qualitative inquiry & research design (2nd ed.). Thousand Oaks, CA: Sage.
    Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson.
    DeMars, C. (2010). Item response theory. New York, NY: Oxford University Press. http://dx.doi.org/10.1093/acprof:oso/9780195377033.001.0001
    DeVellis, R. F. (2011). Scale development: Theory and applications (3rd ed.). Thousand Oaks, CA: Sage.
    Dillman, D., Smyth, J., & Christian, L. (2008). Internet, mail, and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: Wiley.
    Enders, C. K. (2010). Applied missing data analysis. New York, NY: Guilford Press.
    Erickson, F. (1986). Qualitative methods in research on teaching. In M. Wittrock (Ed.), Handbook of research on teaching (pp. 119–161). New York, NY: Macmillan.
    Fabrigar, L. R., & Wegener, D. T. (2011). Understanding statistics: Exploratory factor analysis. New York, NY: Oxford University Press. http://dx.doi.org/10.1093/acprof:osobl/9780199734177.001.0001
    Fink, A. (2012). How to conduct surveys: A step-by-step guide (5th ed.). Thousand Oaks, CA: Sage.
    Fowler, F. J. (2008). Survey research methods (4th ed.). Thousand Oaks, CA: Sage.
    Fraenkel, J. R., & Wallen, N. E. (2011). How to design and evaluate research in education (6th ed.). Boston, MA: McGraw-Hill.
    Gage, N. L. (1989). The paradigm wars and their aftermath. Educational Researcher, 18(7), 4–10.
    Gall, M. D., Borg, W. R., & Gall, J. P. (2003). Educational research: An introduction (7th ed.). White Plains, NY: Longman.
    Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York, NY: Basic Books.
    Gay, L. R., & Airasian, P. (2003). Educational research: Competencies for analysis and application (7th ed.). Upper Saddle River, NJ: Pearson.
    Gay, L. R., Mills, G. E., & Airasian, P. (2009). Educational research: Competencies for analysis and applications (9th ed.). Upper Saddle River, NJ: Pearson.
    Geertz, C. (2003). Thick description: Toward an interpretive theory of culture. In Y. S. Lincoln & N. K. Denzin (Eds.), Turning points in qualitative research: Tying knots in a handkerchief (pp. 143–168). Walnut Creek, CA: AltaMira Press.
    Gelman, A., & Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge, England: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511790942
    Gibbs, G. R. (2007). Analyzing qualitative data. Thousand Oaks, CA: Sage.
    Glaser, B., & Strauss, A. (1967). The discovery of grounded theory. Chicago, IL: Aldine.
    Glass, G. V., & Hopkins, K. D. (1996). Statistical methods in education and psychology (3rd ed.). Needham Heights, MA: Allyn & Bacon.
    Groves, R. M., Fowler, F. J., Couper, M. P., & Lepkowski, J. M. (2009). Survey methodology (Wiley Series in Survey Methodology). New York, NY: Wiley.
    Guba, E. G. (1981). Criteria for assessing the trustworthiness of naturalistic inquiries. Educational Communications and Technology Journal, 29(1), 75–91.
    Hambleton, R. K., & Swaminathan, H. (2010). Item response theory: Principles and applications. New York, NY: Springer.
    Heeringa, S. G., West, B. T., & Berglund, P. A. (2010). Applied survey data analysis. Boca Raton, FL: Chapman & Hall/CRC Press. http://dx.doi.org/10.1201/9781420080674
    Hogarth, R. B. (2005). The challenge of representativeness design in psychology and economics. Journal of Economic Methodology, 12, 253–263. http://dx.doi.org/10.1080/13501780500086172
    Howell, D. C. (2007). The analysis of missing data. In W. Outhwaite & S. Turner (Eds.), Handbook of social science methodology (pp. 87–114). Thousand Oaks, CA: Sage.
    Howell, D. C. (2010). Fundamental statistics for the behavioral sciences (7th ed.). Belmont, CA: Wadsworth Cengage Learning.
    Johnson, R. B., & Christensen, L. B. (2000). Educational research: Quantitative and qualitative approaches. Boston, MA: Allyn & Bacon.
    Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14–26. http://dx.doi.org/10.3102/0013189X033007014
    Karpov, Y. V., & Haywood, H. C. (1998). Two ways to elaborate Vygotsky's concept of mediation: Implications for instruction. American Psychologist, 53, 27–36. http://dx.doi.org/10.1037/0003-066X.53.1.27
    Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher's handbook (4th ed.). Upper Saddle River, NJ: Prentice Hall.
    Kerlinger, F. N. (1973). Foundations of behavioral research (2nd ed.). New York, NY: Holt, Rinehart & Winston.
    Kirk, R. E. (2012). Experimental design: Procedures for the behavioral sciences (4th ed.). Thousand Oaks, CA: Sage.
    Krathwohl, D., Bloom, B., & Masia, B. (1964). Taxonomy of educational objectives: Handbook 2. Affective domain. New York, NY: David McKay.
    Lagemann, E. C. (2000). An elusive science: The troubling history of education research. Chicago, IL: University of Chicago Press.
    Lichtman, M. (2013). Qualitative research in education: A user's guide (3rd ed.). Thousand Oaks, CA: Sage.
    Marcia, J. E. (1991). Identity and self development. In R. Lerner, A. Peterson, & J. Brooks-Gunn (Eds.), Encyclopedia of adolescence (Vol. 1). New York, NY: Garland Press.
    Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62(3), 279–300. http://dx.doi.org/10.17763/haer.62.3.8323320856251826
    McMillan, J. H., & Schumacher, S. (2006). Research in education: Evidence-based inquiry (6th ed.). Boston, MA: Pearson.
    Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco, CA: Jossey-Bass.
    Mertler, C. A., & Charles, C. M. (2010). Introduction to educational research (7th ed.). Boston, MA: Pearson.
    Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). New York, NY: Macmillan.
    Messick, S. (1996a). Standards-based score interpretation: Establishing valid grounds for valid inferences. In Proceedings of the joint conference on standard setting for large scale assessments (Sponsored by National Assessment Governing Board and The National Center for Education Statistics). Washington, DC: Government Printing Office.
    Messick, S. (1996b). Validity of performance assessment. In G. Philips (Ed.), Technical issues in large-scale performance assessment (pp. 1–18). Washington, DC: National Center for Educational Statistics.
    Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage.
    Moss, P. A., Phillips, D. C., Erickson, F. D., Floden, R. E., Lather, P. A., & Scheider, B. L. (2009). Learning from our differences: A dialogue across perspectives on quality in education research. Educational Researcher, 38(7), 501–517. http://dx.doi.org/10.3102/0013189X09348351
    Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.
    Mulaik, S. A. (2009). Foundations of factor analysis (2nd ed.). Boca Raton, FL: CRC Press.
    Netemeyer, R. G., Bearden, W. O., & Sharma, S. (2003). Scale development in the social sciences: Issues and applications. Thousand Oaks, CA: Sage.
    Parlett, M., & Hamilton, D. (1976). Evaluation as illumination: A new approach to the study of innovatory programs. In G. Glass (Ed.), Evaluation studies review annual (Vol. 1, pp. 140–157). Beverly Hills, CA: Sage.
    Polanyi, M. (1958). Personal knowledge. New York, NY: Harper & Row.
    Privitera, G. J. (2012). Statistics for the behavioral sciences. Thousand Oaks, CA: Sage.
    Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park, CA: Sage.
    Rea, L. M., & Parker, R. A. (2005). Designing and conducting survey research: A comprehensive guide (3rd ed.). San Francisco, CA: Jossey-Bass.
    Saldana, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage.
    Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
    Shavelson, R. J. (1996). Statistical reasoning for the behavioral sciences (3rd ed.). Boston, MA: Allyn & Bacon.
    Shavelson, R. J., & Towne, L. (2002). Scientific research in education. Washington, DC: National Academies Press.
    Springer, K. (2010). Educational research: A contextual approach. Hoboken, NJ: Wiley.
    Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.
    Stevens, S. S. (1951). Mathematics, measurement and psychophysics. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1–49). New York, NY: Wiley.
    Stevens, S. S. (1975). Psychophysics. New York, NY: Wiley.
    Strauss, A. L. (1987). Qualitative data analysis for social scientists. Cambridge, England: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511557842
    Strauss, A. L., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques (2nd ed.). Newbury Park, CA: Sage.
    Taylor, F. W. (1947). Scientific management. New York, NY: Harper & Row.
    Thomas, D. R. (2006). A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation, 27(2), 237–246. http://dx.doi.org/10.1177/1098214005283748
    Thompson, B. (2004). Exploratory and confirmatory factor analysis: Understanding concepts and applications. Washington, DC: American Psychological Association. http://dx.doi.org/10.1037/10694-000
    Tomei, L. A. (2005). Taxonomy for the technology domain. Hershey, PA: Information Science. http://dx.doi.org/10.4018/978-1-59140-524-5
    Tukey, J. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.
    van Buuren, S. (2012). Flexible imputation of missing data. Boca Raton, FL: Chapman & Hall/CRC Press. http://dx.doi.org/10.1201/b11826
    van der Linden, W. J., & Hambleton, R. K. (Eds.). (1997). Handbook of modern item response theory. New York, NY: Springer. http://dx.doi.org/10.1007/978-1-4757-2691-6
    Van Kaam, A. (1969). Existential foundations of psychology. Pittsburgh, PA: Image Books and Duquesne University Press.
    Walkey, F. H., & Welch, G. (2010). Demystifying factor analysis: How it works and how to use it. Bloomington, IN: Xlibris.
    Wiersma, W., & Jurs, S. (2009). Research design in quantitative research. In Research methods in education: An introduction. Boston, MA: Pearson.
    Wolcott, H. F. (1988). Ethnographic research in education. In R. M. Jaeger (Ed.), Complementary methods for research in education (pp. 187–212). Washington, DC: American Educational Research Association.
    Wolcott, H. F. (1994). Transforming qualitative data. Thousand Oaks, CA: Sage.
    Woolfolk, A. (2011). Educational psychology (11th ed.). Boston, MA: Pearson.
    Marsden, P. V., & Wright, J. D. (Eds.). (2010). Handbook of survey research. Bingley, England: Emerald.
    Yin, R. K. (2012). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage.
    Yow, V. R. (2005). Recording oral history. Walnut Creek, CA: AltaMira Press.

    About the Authors

    Laura M. O'Dwyer, PhD, is an Associate Professor in the Department of Educational Research, Measurement and Evaluation in the Lynch School of Education at Boston College. She teaches quantitative research methods, introductory statistics, hierarchical linear modeling, survey research methods, and experimental design.

    James A. Bernauer, EdD, is an Associate Professor in the School of Education and Social Sciences at Robert Morris University in Pittsburgh. He teaches qualitative research, educational psychology, and research methods.

