Conducting Educational Research: Guide to Completing a Major Project


Daniel J. Boudah


    Research is a broad term that usually means the systematic and rigorous process of posing a focused question, developing a hypothesis, testing the hypothesis or focus by collecting and analyzing relevant data, and drawing conclusions. Research methods vary, but the goal of research is almost always the same: to answer a question. The questions may range from “Does A cause B?” to “What is A like?” Educational research answers questions important to students, teachers, administrators, related service providers, and other stakeholders.

    In an effort to encourage research-based practice and data-based decision making, university-based teacher education programs, as well as some school-based professional development efforts, often require novice and experienced educators to conduct some form of research. For example, in university programs today, individuals who are working toward Master of Education, Master of Teaching, Master of Science in Administration, Doctor of Education, and similar degrees are required to complete a research project or thesis that involves identifying a relevant question, reviewing the literature base for information, developing a research or project plan, collecting and analyzing data, drawing conclusions, and reporting outcomes.


    Thus, this book is designed for two purposes. First, this book was written to enable university students (primarily graduate students) to engage thoughtfully in conducting educational research. Second, this book was written to help educators engage in research that leads to data-based decision making. There are already a number of textbooks on the topic of research in education, and though many are very thorough and helpful, they may be too comprehensive to give direct assistance to students during the process of conducting a thesis or major research project. There are also many “action research” books available, and while much of the current action research in the field of education may be of some value, it may not be sufficiently rigorous or allow educators to address many of the complexities that they face every day in schools. This book, therefore, will enable individuals in education or related fields to conduct relevant research in a systematic and rigorous way, research that will yield reliable and trustworthy outcomes that can inform practice or system issues.

    The book will assist educational researchers in the step-by-step implementation of a research project or thesis in or related to a school or clinical setting. It is organized and written to provide guidance for the following tasks:

    • Developing a research question
    • Searching and analyzing existing literature for the current state of knowledge on a topic
    • Developing a research plan
    • Collecting and analyzing data
    • Drawing conclusions
    • Sharing the conclusions with others

    The research designs addressed in the text are included in the table below.

    Type of Research    Design
    Quantitative        Correlational research
    Qualitative         Case study, grounded theory

    Consider this a guide to be used during the research process rather than a first introduction to research. This book picks up after a student or educator has completed at least an introductory research course, become somewhat familiar with statistics, and is ready to begin a major project or thesis. The chapters are organized to provide guidance throughout the entire research process, prompting researchers to stop and apply their understandings at each step. Scaffolding is therefore provided in the form of questions, outlines, tables, and other supports for researchers to complete from the beginning of a project to its end, particularly to link research questions to designs, designs to data sources, and data sources to appropriate analyses. Understanding these parts of the research process, and their relationship to one another, typically makes the difference in whether a novice researcher can conduct thoughtful research that can be applied in education-related settings.

    Pedagogical Elements

    A number of pedagogical elements are included in this book to facilitate the research process.

    • Each chapter begins with an outline and chapter objectives and ends with a summary and discussion questions. Bolded words in the text are included in the Glossary.
    • The “Technology in Research” boxes provide concise information on a sampling of topics to help researchers take advantage of related technologies and online resources.
    • The “In Their Own Words” vignettes contain tips and suggestions from students who have completed projects.
    • A critical element at the end of each chapter is “Your Research Project in Action.” This portion of each chapter prompts students to apply what they have learned to their current research projects through a series of guiding questions or prompts. For example, in Chapter 5 on experimental and quasi-experimental designs, “Your Research Project in Action” requires students to address variables to be used, access to participants, timing of intervention, duration and location of intervention, and data sources. Practical, detailed guidance is essential to successful research, and this section helps provide a crucial linkage between knowing something about research and understanding how to carry out research in education.

    In sum, this guide leads the reader through the entire research process, from developing and focusing research questions; to searching and analyzing the existing literature; to selecting the most appropriate research design, measurement, and method(s) of analysis; to interpreting and communicating outcomes. The intended audience includes teachers, administrators, counselors, psychologists, and other education-related service providers in master's-level university programs, as well as experienced educators in school settings. It may well be that those are one and the same; that is, experienced educators working on master's degrees while continuing to teach full-time. Pilot-testing of most of the chapters in this text has proven very useful and provided excellent validation for the organization and presentation of information for the intended audience. I am confident that the text will be of great value to you and your students.


    This book would not have been possible without the early efforts and contributions of Dr. Peggy Weiss of Virginia Tech. Thank you, Peggy.

    In addition, I want to thank my wife and partner, Pamela. Thank you for your patience, support, and ideas regarding the usefulness of this book.

    I am grateful to the editors and staff at SAGE for your encouragement and invaluable assistance throughout the production process.

    Thank you, also, to the following reviewers for your thoughtful and valuable feedback on earlier drafts.

    • Gabriella Belli, PhD, Virginia Polytechnic Institute and State University
    • William Damion Bigos, PhD, Penn State University–Harrisburg
    • Gerard Giordano, PhD, University of North Florida
    • Dominic F. Gullo, Queens College, City University of New York
    • Ismail S. Gyagenda, Mercer University
    • Barbara Y. LaCost, University of Nebraska–Lincoln
    • John J. Matt, The University of Montana
    • Paige Tompkins, Mercer University
    Finally, and most importantly:

      Not to us, O Lord, not to us

      But to your name be the glory,

      Because of your love and faithfulness.

      (Psalm 115:1)
    Appendix A: Organizations That Support Educational Research

    There are many professional organizations and groups in education. The organizations that focus on research in education can provide valuable support and resources to beginning researchers. Some of these groups have members from all areas of education, and others serve smaller subsections of education professionals. This appendix contains information about some of these organizations, including each organization's mission statement and goals, its sources of support, how it supports researchers, where to find more information, and any type of funding these groups may provide. It was impossible to highlight all the groups that support educational research, so groups are highlighted based upon their commitment to research, the ease with which they are accessible, and the reputation they have attained in education.

    Alliance for International Educational and Cultural Exchange

    The Alliance for International Educational and Cultural Exchange was established in 1992 to promote federal policies that support and advance international exchange in all its dimensions. Representing 79 U.S.-based exchange organizations, the Alliance has established itself as the leading policy voice of the American exchange community. The Alliance formed through a merger of two predecessor organizations: the Liaison Group, which represented higher education associations, and the International Exchange Association (IEA), a coalition of citizen and youth exchange groups.

    American Educational Research Association (AERA)

    The American Educational Research Association (AERA), founded in 1916, is concerned with improving the educational process by encouraging scholarly inquiry related to education and by promoting the dissemination and practical application of research results. AERA is the most prominent international professional organization with the primary goal of advancing educational research and its practical application. Its 22,000 members are educators; administrators; directors of research; persons working with testing or evaluation in federal, state, and local agencies; counselors; evaluators; graduate students; and behavioral scientists. The broad range of disciplines represented by the membership includes education, psychology, statistics, sociology, history, economics, philosophy, anthropology, and political science.

    American Psychological Association (APA)

    Based in Washington, D.C., the American Psychological Association (APA) is a scientific and professional organization that represents psychology in the United States. With 150,000 members, APA is the largest association of psychologists worldwide. The goals of the American Psychological Association are to advance psychology as a science and profession and as a means of promoting health, education, and human welfare by

    • the encouragement of psychology in all its branches in the broadest and most liberal manner;
    • the promotion of research in psychology and the improvement of research methods and conditions;
    • the improvement of the qualifications and usefulness of psychologists through high standards of ethics, conduct, education, and achievement;
    • the establishment and maintenance of the highest standards of professional ethics and conduct of the members of the Association; and
    • the increase and diffusion of psychological knowledge through meetings, professional contacts, reports, papers, discussions, and publications, thereby to advance scientific interests and inquiry, and the application of research findings to the promotion of health, education, and the public welfare.
    American Speech-Language-Hearing Association (ASHA)

    ASHA is the professional, scientific, and credentialing association for more than 123,000 members and affiliates who are speech-language pathologists; audiologists; and speech, language, and hearing scientists in the United States and internationally. The mission of the American Speech-Language-Hearing Association is to promote the interests of and provide the highest-quality services for professionals in audiology, speech-language pathology, and speech and hearing science and to advocate for people with communication disabilities.

    Council for Exceptional Children-Division for Research (CEC-DR)

    The CEC Division for Research (CEC-DR) is the official division of the Council for Exceptional Children devoted to the advancement of research related to the education of individuals with disabilities and/or who are gifted. The goals of CEC-DR include the promotion of equal partnership with practitioners in designing, conducting, and interpreting research in special education. The Division for Research of the Council for Exceptional Children supports and encourages useful and sound research about children, youth, and adults with disabilities; their families; and the people who work with them.

    International Reading Association (IRA)

    The International Reading Association (IRA) is a professional organization for those involved in teaching reading to students of all ages. According to its website, IRA's “focus has expanded to address a broad range of issues in literacy education worldwide.” One of the major areas of involvement for IRA is in supporting research and research dissemination. IRA has played an increasingly important role in advocating for research-based instruction in schools, most notably in promoting research-based instruction in reading during the development of the No Child Left Behind Act.

    IRA promotes research and publication of research on literacy and literacy issues “through a series of dedicated research awards, grants, and fellowships.” The following awards and grants might be of interest:

    The Teacher as Researcher Grant supports classroom teachers in their inquiries about literacy and instruction. Grants of up to $5,000 will be awarded, although priority will be given to smaller grants (e.g., $1,000 to $2,000) in order to provide support for as many teacher researchers as possible.

    The Elva Knight Research Grant provides up to $10,000 for research in reading and literacy. Contingent upon available funds in any given year, as many as four grants may be awarded. Projects should be completed within 2 years and may be carried out using any research method or approach as long as the focus of the project is on research in reading or literacy.

    The Jeanne S. Chall Research Fellowship is a $6,000 grant established to encourage and support reading research by promising scholars. The special emphasis of the Fellowship is to support research efforts in the following areas: beginning reading (theory, research, and practice that improves the effectiveness of learning to read); readability (methods of predicting the difficulty of texts); reading difficulty (diagnosis, treatment, and prevention); stages of reading development; the relation of vocabulary to reading; and diagnosing and teaching adults with limited reading ability.

    The Research and Studies Committee of IRA is dedicated to advancing research, as described in the following charges:

    • Determine issues that merit intensive study and make recommendations to the Board.
    • Offer leadership in research activities in cooperation with other committees.
    • Encourage the submittal of proposals for IRA conferences and publications to disseminate research findings, subject to the regular review process.


    National Association of School Psychologists (NASP)

    The National Association of School Psychologists (NASP) is a nonprofit association representing over 22,000 school psychologists from across the United States and other countries. The mission of NASP is to represent and support school psychology with leadership to enhance the mental health and educational competence of all children. Partnering with all who share our commitment to children and youth is critical to our mission. The website is a resource for members, parents, educators, and others interested in helping children and their families.

    National Council for the Social Studies (NCSS)

    The National Council for the Social Studies (NCSS) is an organization for all professionals involved in teaching social studies. NCSS defines social studies as “the integrated study of the social sciences and humanities to promote civic competence.” On its website, NCSS describes social studies as being coordinated, systematic study drawing upon such disciplines as anthropology, archaeology, economics, geography, history, law, philosophy, political science, psychology, religion, and sociology, as well as appropriate content from the humanities, mathematics, and natural sciences. In essence, social studies promotes knowledge of and involvement in civic affairs. And because civic issues—such as health care, crime, and foreign policy—are multidisciplinary in nature, understanding these issues and developing resolutions to them require multidisciplinary education. These characteristics are the key defining aspects of social studies.

    With this in mind, NCSS provides support for educators in their quest to be better educators in many ways, including providing grants and awards for research. The NCSS and the Research Committee sponsor annual research awards for scholarly inquiry in the social studies. These awards are granted to inquiry in any area of social studies, and “research is broadly defined to include experimental, qualitative, historical, and philosophical work.”

    Two of the research awards of interest are these:

    • Larry Metcalf Exemplary Dissertation Award

      Frequency: Biennial, odd-numbered years

      Award: $250 commemorative gift, annual conference session for research presentation

      Purpose: The Larry Metcalf Exemplary Dissertation Award recognizes outstanding research completed in pursuit of the doctoral degree.

    • Exemplary Research Award

      Award: Commemorative gift, annual conference session for research presentation

      Purpose: The Exemplary Research in Social Studies Award acknowledges and encourages scholarly inquiry in significant issues and possibilities for social studies education.

    The NCSS also participates in numerous professional development activities, provides multiple publication outlets, and hosts an annual conference with many opportunities for presenting.

    The National Council of Teachers of English (NCTE)

    The National Council of Teachers of English (NCTE) is an organization dedicated to improving the teaching and learning of English. The group boasts membership of over 60,000 in both the United States and other countries. The Council promotes the development of literacy and the use of language to construct personal and public worlds and to achieve full participation in society through the learning and teaching of English and the related arts and sciences of language.

    NCTE includes a Research Foundation that sponsors several grants and encourages the conduct and dissemination of high-quality research:

    The Cultivating New Voices Among Scholars of Color (CNV) program is intended to provide support, mentoring, and networking opportunities for early career scholars of color. The program aims to work with graduate students of color to cultivate their ability to draw from their own cultural/linguistic perspectives as they conceptualize, plan, conduct, and write their research. The program provides socialization into the research community and interaction with established scholars whose own work can be enriched by their engagement with new ideas and perspectives.

    NCTE provides many opportunities for publication and professional development at its website.

    National Council of Teachers of Mathematics (NCTM)

    The National Council of Teachers of Mathematics (NCTM) is "a public voice of mathematics education, providing vision, leadership, and professional development to support teachers in ensuring mathematics learning of the highest quality for all students." In addition to providing leadership in mathematics research, NCTM provides numerous grants and awards through the Mathematics Education Trust.

    National Institute of Child Health and Human Development (NICHD), Child Development and Behavior (CDB) Branch

    The National Institute of Child Health and Human Development (NICHD), created by Congress in 1962, supports and conducts research on topics related to the health of children, adults, families, and populations. These health topics include the following:

    • Reducing infant deaths
    • Improving the health of women and men
    • Understanding reproductive health
    • Learning about growth and development
    • Examining problems of birth defects and mental retardation
    • Enhancing function and involvement across the life span through medical rehabilitation research

    The NICHD is part of the National Institutes of Health (NIH), the federal government's major medical research agency. NICHD research focuses on these ideas:

    • Events that happen prior to and throughout pregnancy as well as during childhood have a great impact on the health and well-being of adults.
    • Human growth and development is a lifelong process that has many phases and functions.
    • Learning about the reproductive health of men and women and educating people about reproductive practices is important to both individuals and societies.
    • Developing medical rehabilitation interventions can improve the health and well-being of people with disabilities.

    Within the NICHD is the Child Development and Behavior Branch (CDB). This branch is most well known for its recent reading research. However, CDB also supports research on psychological, psychobiological, and educational development from conception to maturity, focusing on the following program areas:

    • Social and affective development; child maltreatment and violence
    • Developmental cognitive psychology, behavioral neuroscience, and psychobiology
    • Behavioral pediatrics and health promotion research
    • Human learning and learning disabilities
    • Language, bilingual, and biliteracy development and disorders; adult, family, and adolescent literacy
    • Early learning and school readiness
    • Mathematics and science cognition and learning—development and disorders

    The Child Development and Behavior Branch also provides information on these programs and makes funding opportunities available through NICHD.

    National Science Teachers Association

    The National Science Teachers Association (NSTA) supports and encourages teaching, learning, and innovation in the sciences. The mission of NSTA is "to promote excellence and innovation in science teaching and learning for all." The NSTA supports research in several ways, including awards and grants to students, teachers, and principals. Many of these awards are specific to certain areas of science, such as rocketry or space. Following are examples of general awards. For more information about NSTA awards and recognition, see the NSTA website.

    The Vernier Technology Awards recognize and reward the innovative use of data collection technology using a computer, graphing calculator, or other handheld in the science classroom. A total of seven awards will be presented:

    • One award at the elementary level (Grades K–5)
    • Two awards at the middle level (Grades 6–8)
    • Three awards at the high school level (Grades 9–12)
    • One award at the college level

    Each award will consist of $1,000 toward expenses to attend the NSTA National Convention, $1,000 in cash for the teacher, and $1,000 in Vernier products.

    The Delta Education/CPO Science Awards for Excellence in Inquiry-based Science Teaching recognize and honor three full-time PreK–12 teachers of science who successfully use inquiry-based science to enhance teaching and learning in their classroom.


    Appendix B: Using Microsoft Excel to Analyze Data

    Microsoft Excel has numerous add-in features that support statistical analysis. To access these features, you must load the Analysis ToolPak. Following are directions for Excel 2003. More recent versions of Excel have different menu starting points for accessing the commands, but the process of using the statistical functions, and the results they produce, are very similar.

    Loading the Analysis ToolPak

    Open the Tools menu in Excel. If Data Analysis appears near the bottom of the menu, the Analysis ToolPak is already loaded. If Data Analysis does not appear, choose Add-Ins from the Tools menu.

    In the Add-Ins box, click the box next to Analysis ToolPak and click OK. The Data Analysis menu then should appear on the Tools menu. Depending on your Microsoft Office installation, you may be prompted for the Microsoft Office Installation CD to install this component.

    Descriptive Statistics

    The Descriptive Statistics tool generates simple descriptive statistics, including the mean, median, and standard deviation for a data set. To compute these statistics, open the Tools menu and choose Data Analysis. In the Data Analysis box, select Descriptive Statistics and specify the cells that contain your data in the Input Range box. Click the Summary Statistics checkbox in the lower left corner. By default, Excel generates the statistics on a new worksheet.
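    Outside Excel, the same summary statistics can be checked with a few lines of code. The sketch below is a minimal Python example using only the standard library; the test scores are hypothetical stand-ins for an Input Range.

    ```python
    import statistics

    # Hypothetical test scores standing in for an Excel Input Range
    scores = [71, 74, 78, 85, 85, 90, 93]

    mean = statistics.mean(scores)      # arithmetic mean
    median = statistics.median(scores)  # middle value of the sorted data
    stdev = statistics.stdev(scores)    # sample standard deviation (n - 1 in the
                                        # denominator, as in Excel's Descriptive
                                        # Statistics output)

    print(mean, median, stdev)
    ```

    Comparing Excel's output against an independent computation like this is a quick way to catch an Input Range that was specified incorrectly.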


    The Histogram tool requires that a Bin Range, or list of categories, be specified. The Bin Range represents the categories for which you want frequency counts. For example, the Bin Range might include all possible test scores or simply a range of scores, as in the table below.

    Bin Range 1 might represent how many people scored 71, 72, 73, etc.; Bin Range 2 how many people scored 0–20, 20–40, 40–60, etc.

    Bin Range 1    Bin Range 2
    71             0–20
    72             20–40
    73             40–60
    …              …

    To use Excel's Fill menu to help create a Bin Range for a histogram, start by entering the lowest possible data value in a cell. You can enter 0 or use the MIN() function to calculate the actual minimum. With that minimum cell still selected, choose Edit, then Fill, then Series.

    In the Series dialog box, select Columns for “Series in” and Linear for “Type.” Enter the appropriate Step and Stop values and click OK. The Step value specifies how much to increase each entry, and the Stop value indicates when to stop the series.

    Creating the Histogram. After you create the Bin Range, generate the actual histogram. Open the Tools menu, choose Data Analysis, and select Histogram in the Data Analysis box. In the Histogram box, the Input Range is the actual data you want to summarize; for example, the list of all test scores from your Excel spreadsheet. The Bin Range is the range you created with the different categories. Click the Chart Output box at the bottom of the dialog box and click OK. Excel produces a frequency distribution and a chart on another worksheet.
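    The frequency counts behind the histogram can also be reproduced by hand. The Python sketch below, with hypothetical scores and bin values, mirrors how the Histogram tool tallies data: each value is counted in the first bin whose upper boundary it does not exceed.

    ```python
    # Hypothetical scores and bin values (the Bin Range)
    scores = [12, 35, 35, 41, 58, 63, 77, 88, 91]
    bins = [20, 40, 60, 80, 100]  # upper boundary of each bin

    counts = []
    remaining = sorted(scores)
    for upper in bins:
        # Tally the values at or below this bin boundary...
        n = sum(1 for s in remaining if s <= upper)
        counts.append(n)
        # ...and carry the rest forward to the next bin
        remaining = [s for s in remaining if s > upper]

    print(list(zip(bins, counts)))  # frequency distribution by bin
    ```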


    The Excel ANOVA analysis tools provide several options. You must first determine the number of factors and the number of samples you have from the populations you want to test. To access ANOVA, open the Tools menu, choose Data Analysis, and then select one of the ANOVA options in the box.

    You will need to specify the Input Range, which contains the data for your groups, and the output options, which tell Excel where to place the results. The following explanations are excerpted from the Microsoft Excel help function description:

    Anova: Single Factor. This tool performs a simple analysis of variance on data from two or more samples. The analysis provides a test of the hypothesis that each sample is drawn from the same underlying probability distribution against the alternative hypothesis that underlying probability distributions are not the same for all samples. If there are only two samples, the worksheet function, TTEST, can equally well be used. With more than two samples, there is no convenient generalization of TTEST, and the Single Factor Anova model should be called upon instead.

    Anova: Two-Factor With Replication. This analysis tool is useful when data can be classified along two dimensions. For example, in an experiment to measure the height of plants, the plants may be given different brands of fertilizer (for example, A, B, C) and might also be kept at different temperatures (for example, low and high). For each of the 6 possible pairs of {fertilizer, temperature}, one has an equal number of observations of plant height.

    Anova: Two-Factor Without Replication. This analysis tool is useful when data are classified on two dimensions as in Anova: Two-Factor With Replication. However, for this tool, one assumes that there is only a single observation for each pair; for example, for each {fertilizer, temperature} pair. Using this tool, one can apply the tests in steps 1 and 2 of the Anova: Two-Factor With Replication case, but one does not have enough data to apply the test in step 3.
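    For readers who want to see what the single-factor tool is computing under the hood, here is a minimal Python sketch of one-way ANOVA, with hypothetical data for three conditions. It produces the F value that Excel reports as "F" in its output table.

    ```python
    # Hypothetical scores from three conditions (three samples)
    groups = [
        [4.0, 5.0, 6.0],   # condition A
        [6.0, 7.0, 8.0],   # condition B
        [8.0, 9.0, 10.0],  # condition C
    ]

    k = len(groups)                   # number of groups
    n = sum(len(g) for g in groups)   # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-groups sum of squares: variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: variation of scores around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    ms_between = ss_between / (k - 1)  # between-groups mean square
    ms_within = ss_within / (n - k)    # within-groups mean square
    f_stat = ms_between / ms_within    # F = 12.0 for this hypothetical data

    print(f_stat)
    ```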

    t Tests

    As described earlier in the text, sometimes the use of a simple t test may be appropriate. Microsoft Excel can quickly and easily help you with this analysis also. To access the t test analysis function, open the Tools menu, choose Data Analysis, and then select one of the t test options in the box. You will need to specify the Input Range, which contains the data for your two samples, and the output options, which tell Excel where to place the results.

    The following t test explanations are excerpted from the Microsoft Excel help function description:

    The Two-Sample t Test analysis tools test for equality of the population means underlying each sample. The three tools employ different assumptions: (a) the population variances are equal, (b) the population variances are not equal, and (c) the two samples represent before-treatment and after-treatment observations on the same subjects.

    For all three tools below, t is computed and shown as "t Stat" in the output tables. Depending on the data, t can be negative or nonnegative. Under the assumption of equal underlying population means, if t < 0, "P(T <= t) one-tail" gives the probability that a value of the t statistic would be observed that is more negative than t. If t ≥ 0, "P(T <= t) one-tail" gives the probability that a value of the t statistic would be observed that is more positive than t. "t Critical one-tail" gives the cutoff value so that the probability of observing a value of the t statistic greater than or equal to "t Critical one-tail" is alpha. "P(T <= t) two-tail" gives the probability that a value of the t statistic would be observed that is larger in absolute value than t. "t Critical two-tail" gives the cutoff value such that the probability of an observed t statistic larger in absolute value than "t Critical two-tail" is alpha.

    t-Test: Two-Sample Assuming Equal Variances. This analysis tool performs a two-sample t test. This t test form, referred to as a homoscedastic t test, assumes that the two data sets come from distributions with the same variances. You can use this t test to determine whether the two samples are likely to have come from distributions with equal population means.

    t-Test: Two-Sample Assuming Unequal Variances. This analysis tool performs a two-sample t test. This t test form, referred to as a heteroscedastic t test, assumes that the two data sets came from distributions with unequal variances. As with the equal variances t test above, you can use this t test to determine whether the two samples are likely to have come from distributions with equal population means. Use this test when there are distinct subjects in the two samples. Use the paired test, described below, when there is a single set of subjects and the two samples represent measurements for each subject before and after a treatment.

    The Excel worksheet function, TTEST, uses the calculated df value without rounding since it is possible to compute a value for TTEST with a noninteger df. Because of these different approaches to determining degrees of freedom, results of TTEST and this t test tool will differ in the unequal variances case.

    t-Test: Paired Two Sample for Means. You can use a paired test when there is a natural pairing of observations in the samples, such as when a sample group is tested twice—before and after an experiment. This analysis tool and its formula perform a paired two-sample t test to determine whether observations taken before a treatment and observations taken after a treatment are likely to have come from distributions with equal population means. This t test form does not assume that the variances of both populations are equal.
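    As a concrete illustration of the paired case, this minimal Python sketch computes the "t Stat" value for hypothetical before- and after-treatment scores from the same five subjects.

    ```python
    import math

    # Hypothetical before- and after-treatment scores for five subjects
    before = [10.0, 12.0, 9.0, 11.0, 13.0]
    after = [12.0, 15.0, 10.0, 14.0, 15.0]

    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = sum(diffs) / n

    # Sample standard deviation of the differences (n - 1 in the denominator)
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))

    # t Stat: mean difference divided by its standard error; df = n - 1
    t_stat = mean_d / (sd_d / math.sqrt(n))
    print(t_stat)
    ```

    Because only the per-subject differences enter the computation, the paired test removes between-subject variation, which is why it is preferred when the same subjects are measured twice.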


    When you conduct correlational studies, Microsoft Excel statistical functions also may help you with less complex analyses. To access this analysis function, click on the Tools tab, Data Analysis, and then in the box select Correlation. You will need to select the input and output options, which are the data corresponding to your variables.

    The following explanation is excerpted from the Microsoft Excel help function description:

    The CORREL and PEARSON worksheet functions both calculate the correlation coefficient between two measurement variables when measurements on each variable are observed for each of N subjects. (Any missing observation for any subject causes that subject to be ignored in the analysis.) The correlation analysis tool is particularly useful when there are more than two measurement variables for each of N subjects. It provides an output table, a correlation matrix, showing the value of CORREL (or PEARSON) applied to each possible pair of measurement variables.
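    The same output table can be sketched with NumPy's corrcoef, which applies the Pearson formula to every pair of rows, mirroring the correlation matrix the Correlation tool produces (the three variables and their values below are invented for illustration):

```python
import numpy as np

# Hypothetical measurements on three variables for the same six subjects
hours_studied = [2, 4, 6, 8, 10, 12]
quiz_score = [55, 60, 70, 72, 80, 88]
absences = [9, 7, 8, 5, 3, 2]

# Each cell of the matrix is CORREL/PEARSON applied to one pair of
# variables; the diagonal is 1 (each variable with itself)
r_matrix = np.corrcoef([hours_studied, quiz_score, absences])

print(np.round(r_matrix, 3))
```

    In this made-up data, hours studied correlates strongly and positively with quiz score, and strongly and negatively with absences, which the off-diagonal cells report.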

    See Microsoft Excel Help for further explanations of Excel's statistical analysis capabilities.


    • A-B-A single subject design—a design in which there are three conditions: baseline, intervention, and return to baseline.
    • Abstract—a brief summary of the study.
    • Alternative causes—a threat to internal validity involving reasons for change in the dependent variable not due to effects of the independent variable.
    • Analysis of covariance (ANCOVA)—tests whether a variable other than the independent variable under study might account for some of the difference in, or covary with, the dependent variable.
    • Analysis of variance (ANOVA)—tests the null hypothesis that the means of more than two groups are equal.
    • Axial coding—making connections between a category and its subcategories.
    • Baseline—preliminary data regarding dependent variables that establishes current performance prior to the introduction of an independent variable or intervention.
    • Case sampling—determining who or what will be part of a research study.
    • Case study—an in-depth qualitative inquiry into a single bounded case, such as a person, group, or program.
    • Categories—classification of similar data and/or patterns.
    • Categorical questions—require respondents to supply an answer from predetermined categories.
    • Causal comparative research—identifies potential cause-and-effect relationships between an independent variable (i.e., causal factor) and a dependent variable (i.e., effect factor) in targeted groups of individuals based on preexisting or extant data.
    • Chi-square analysis—compares the observed or actual frequency of observations or responses with an expected frequency of responses.
    • Closed, fixed-response interview—interview in which researcher has developed the questions and the alternative responses for the participant and the participant chooses the most suitable response.
    • Clustered sampling—sampling in which researcher chooses certain physical or geographical areas and identifies a certain number of units to be chosen from each area.
    • Code—a word, letter, number, or other symbol used in a code system to mark, represent, or identify something.
    • Coding—the transforming of a single data item into a more convenient alternative number or abbreviation.
    • Comparison group—the group in the study that does not receive the independent variable; also control group.
    • Comparison group design—a research design in which two groups matched by similar characteristics are pretested and posttested; however, only the experimental group receives the intervention.
    • Concept—the unit of analysis for making comparisons and asking questions.
    • Conceptual frameworks—the lens through which researchers view their study's purpose and outcomes.
    • Confirmability—whether a study's conclusions can be confirmed by outside observers or researchers.
    • Construct validity—the validity with which a study's results can make generalizations about constructs (i.e., data-based concepts or variables).
    • Control group—part of a sample in experimental research that does not receive the intervention or treatment.
    • Convenience sampling—sampling in which the researcher chooses the most efficient and convenient sample available.
    • Correlation coefficient—an index of the strength of relationships among variables.
    • Correlational research—describes or analyzes relationships between variables, conditions, or events, any of which may be reported attitudes, beliefs, or behaviors.
    • Credibility—the truth value of a descriptive study that uses qualitative methods.
    • Criterion-referenced measurement—performance of study participants on a measure is compared to an objective standard.
    • Critical case sampling—sampling in which the researcher chooses the situations or participants because of their uniqueness or how important they are to the issue.
    • Cronbach's alpha—a statistical formula used to determine reliability based on at least two parts of a test; requires only one administration of the measure.
    • Curriculum-based measurement—a set of standard, simple, short-duration fluency measures of reading, spelling, written expression, and mathematics computation; developed to serve as general outcome indicators measuring “vital signs” of student achievement in important areas of basic skills of literacy.
    • Degrees of freedom—numbers used in the statistical calculation of the level of significance that are approximately equal to the number of participants for which one has entered data.
    • Dependent variable—the variable that may change because of the independent variable; outcome, effect, or result of an intervention, measured in some way.
    • Descriptive research—research with the goal of describing a population or phenomenon without determining causality.
    • Dimensions—the locations of a property along a continuum.
    • Directional hypothesis—a hypothesis that implies a difference in a particular direction when two groups are compared or when a group is compared at two different points of time.
    • Discussion—interpretation of how a study's results are important to the field, how they fit into previous research, and what future research is necessary.
    • Effect size—the degree of difference between groups or conditions or the magnitude of difference in outcomes among experimental groups or experimental conditions.
    • Ethnography—a branch of anthropology dealing with the scientific description of individual cultures.
    • Event-based observations—involve identifying a behavior, counting the number of times it occurs, and keeping a tally.
    • Event-sampling observation—helpful for determining the exact number of times a behavior occurs during a predetermined period of time.
    • Experimental designs—typically include these key characteristics: (a) selection of experimental participants, (b) direct manipulation of an independent variable, (c) control of extraneous variables, and (d) measurement of outcomes.
    • Experimental group—the group in a study that receives the independent variable.
    • Experimental research—research with the goal of identifying cause-and-effect relationships.
    • Extant database—an existing database that includes data collected by others.
    • External reliability—the extent to which a researcher could replicate the study in other settings.
    • External validity—the extent to which an observed relationship among variables can be generalized beyond the conditions of the investigation to other populations, settings, and conditions.
    • Extreme sampling—researcher chooses cases that are extreme or dramatically different from the norm in some way; also called deviant sampling.
    • Factorial group design—a design in which more than one independent variable is analyzed in experimental research.
    • Fidelity—the degree to which other researchers could carry out the same study in the same way and to the same extent.
    • Fidelity of treatment implementation—ensuring that the treatment or independent variable is put into place adequately and equally by all involved in the study.
    • Field notes—include information about what a researcher has observed in the field.
    • Focus group—a group of individuals with characteristics similar to those under study who give ideas about concepts important to a study.
    • Formal observations—researchers identify and record (quantitatively) a discrete target behavior.
    • Frequency tables—provide researcher and the reader with information about how often a certain response occurred or the percentage of responses that this frequency indicates.
    • F statistic—a statistical value, used in obtaining the level of significance of an analysis, that is the ratio of between-groups variance to within-groups variance.
    • Generalizable—a study is generalizable if its results can be applied to the population.
    • Going native—the possibility that a researcher gets too close to the situation or becomes biased in presentation because of the relationships developed. A study thus loses its credibility and becomes a statement of opinion that is not valued in the research community.
    • Grounded theory—over time, a grounded theory study works through the following, mostly overlapping phases: data collection, note taking, coding, and memoing.
    • Group differences—a threat to internal validity that occurs when the experimental and control groups are not equivalent in terms of important variables or characteristics.
    • Histograms—illustrate the frequency distribution of a variable whose measures yield interval or ratio data or continuous scores.
    • History—a threat to internal validity that occurs when something unrelated to the independent variable impacts the dependent variable during the study.
    • Hypothesis—a brief statement expressing a possible answer to a research question.
    • Independent variable—the variable that is manipulated and controlled by the researcher in the hope of causing an effect; sometimes referred to as intervention or treatment.
    • Inferential statistics—used to reach conclusions that extend beyond the immediate data.
    • Informal conversational interview—interview that often occurs within the setting being observed; researcher engages participant in conversation about the situation and asks questions about specific events, interactions, or perceptions relevant to the situation.
    • Informal observations—researchers typically utilize anecdotal records and field notes that provide loosely structured, informal ways of recording observations.
    • Intensity sampling—researcher chooses settings or participants where the unit of study occurs most often.
    • Interaction effects—the interrelated effects of the two independent variables on the dependent variable.
    • Internal reliability—the degree to which the study would be carried out the same way if conducted under the same conditions.
    • Internal validity—the approximate validity with which we infer that a relationship between two variables is causal or absence of a relationship implies absence of cause.
    • Interobserver agreement—the degree to which two independent observers record observational data of the same situation similarly.
    • Interval data—are numeric and have equal intervals between values but do not have to contain a zero value.
    • Intervening conditions—in grounded theory, context is identified with moderating variables and intervening conditions with mediating variables.
    • Interview guide approach—interview in which researcher has a general guide outlining types of questions to ask but specifics about wording and elaboration are not included.
    • Intraobserver agreement—the degree to which an observer records similar data about the same observation or test on two occasions.
    • Introduction—section of research report that places study in the context of questions in the field.
    • Levels—different types of a variable, such as differing grade levels, when grade level is an independent variable.
    • Level of significance—the probability that the statistical differences in an analysis are due to chance or measurement error.
    • Likert scale—participants respond to statements with varying degrees of agreement or disagreement.
    • Linear numeric scale—items are judged on a single dimension and arrayed on a scale with equal intervals.
    • Literature search—the process of searching existing research on a topic to understand the state of current research and determine one's own research focus.
    • Low statistical power—a low likelihood that a statistical test will find significant differences due, for example, to small sample size.
    • Main effects—the effects of each independent variable on, or its relationship to, the dependent variable.
    • Maturation—a threat to internal validity that occurs when participants’ knowledge, as measured by a dependent variable, changes due to getting older and gaining greater experience.
    • Measurement issues—a threat to internal validity that occurs when the frequency and practice of assessment and/or the assessment devices used affect the dependent variable.
    • Member checking—the participants in the study review the hypotheses, patterns, characteristics, analysis, interpretations, and conclusions of the researcher.
    • Memos—the most basic way to annotate data; as though working with small electronic sticky notes, the researcher can attach memos to all sorts of data bits.
    • Meta-analysis—literature review that includes statistical analysis of results in reviewed research reports.
    • Method—section of a research report that describes the research process in detail.
    • Method notes—notes that identify and help defend methodological choices in a research study.
    • Multicase study—more than one case study occurs at the same time.
    • Multisite study—more than one site is used for the case study.
    • Multiple-baseline design—a design in which the intervention begins at different points in time for two or more participants.
    • Multiple regression—a statistical procedure that enables one to correlate multiple variables at a time.
    • Multivariate analysis of variance (MANOVA)—analysis of variance used when a study involves more than one dependent variable.
    • Negative case analysis—analysis of cases that show nonoccurrence of a phenomenon.
    • Nominal data—are arranged by unordered, categorical groups.
    • Nonresponse—occurs for at least three reasons: (a) participants are not given a chance to respond (i.e., not chosen), (b) participants are given a chance and refuse to participate, and (c) participants are given a chance to respond and cannot.
    • Norm-referenced measurement—performance of study participants on a measure is compared to the average performance of others.
    • Null hypothesis—a statement signifying that one expects no differences in outcomes or no relationships between the given variables in one's hypothesis.
    • Open coding—comparing one incident with another as research goes along so that similar phenomena can be given the same name.
    • Ordinal data—place responses in a certain order or rank, but this order does not have equal intervals between items.
    • Parallel forms reliability—the degree to which a person's score is similar when the person is given two forms of the same test.
    • Participant—a person from whom a researcher collects data.
    • Patterns—recurring items in the data.
    • Pearson product-moment coefficient— the Pearson r is used when examining relationships between variables that yield continuous scores (i.e., interval or ratio types of data).
    • Peer debriefing—reviewing data, data analysis, and interpretations with a peer who can provide feedback and question one's methods.
    • Persistent observation—conducting observations of phenomena until categories or patterns are saturated or complete; observing until new occurrences of phenomena are infrequent or nonexistent.
    • Phenomenon—the term for research problem in qualitative research; the topic one would like to address, investigate, or study.
    • Population—a group with identifying characteristics; the larger group of people to whom researchers wish to generalize, apply, or relate the results of their research.
    • Post hoc statistical test—analysis of data for patterns not defined before the research took place.
    • Posttest-only group design—a design in which participants are selected, introduced to the intervention, and then observed for some behavior or have their performance measured in some way of interest.
    • Pretest-posttest group design—a design in which participants are identified, pretested, given an intervention, and posttested through observation or some other measurement; the posttest performance can be compared to that measured by the pretest.
    • Primary sources—original literature pieces written by other authors that one cites in a research paper.
    • Prolonged engagement—conducting a study until evidence of the phenomenon studied is saturated or complete.
    • Properties—the characteristics or attributes of a category.
    • Qualitative methods—methods often used in descriptive research; researchers analyze language, written or oral, and actions to determine patterns, themes, or theories in order to provide insight into what is happening in certain situations.
    • Quantitative—research in which the researcher assigns numbers to variables or levels of variables being studied for purposes of statistical analysis.
    • Quantitative methods—methods often used in both experimental and quasi-experimental as well as descriptive research; involve assigning numbers to sequential levels of variables being studied for purposes of statistical analysis.
    • Quasi-experimental research—research in which random assignment is problematic or impossible but, like experimental research, attempts to determine if an independent variable has a direct impact on a dependent variable.
    • Random assignment of participants—participants in a study have an equal chance of being assigned to any group or condition.
    • Random sampling—researcher obtains a sample from a population in which every possible sample has the same chance of being selected for the study.
    • Ratio data—have the characteristics of interval data, but the equal intervals are also related by ratios.
    • Recruiting—process by which a researcher targets, informs, and secures the commitment of participants to be included in a research study.
    • Reliability—the degree to which a study or measure yields similar results across students and time.
    • Reliability coefficient—statistic indicating the relationship between multiple administrations, multiple items, or other analyses of evaluation measures.
    • Replication—the repetition of a study design with similar results.
    • Representative sample—a sample group that has characteristics similar to those of the population so that results of the experiment can be considered generalizable to the population.
    • Research—a broad term that usually means the systematic and rigorous process of posing a focused question, developing a hypothesis or focus, testing the hypothesis or focus by collecting and analyzing relevant data, and drawing conclusions.
    • Research hypothesis—a declarative statement of how one expects the research to turn out.
    • Research problem—the topic one would like to address, investigate, or study, whether descriptively or experimentally.
    • Research proposal—a written rationale and plan for conducting research, usually created by a researcher prior to conducting formal research.
    • Research question—a way of expressing one's interest in a problem or phenomenon.
    • Results—section of the research report that details the discoveries found in data collected.
    • Return rate—proportion of the surveyed sample who return responses.
    • Sample—participants included in the study.
    • Sample characteristics—a threat to external validity that involves a sample with characteristics different than those of the population to which the researcher wishes to generalize.
    • Sampling—the selection of participants for a study.
    • Scales—allow the researcher to organize and analyze data using statistics in order to compare responses across questions.
    • Scatter plot—a graph that includes plotted points representing correlation scores.
    • Secondary sources—sources that one cites that have already been cited by another author.
    • Selective coding—integration of all data by choosing a core category and relating each category to it.
    • Setting characteristics—a threat to external validity that includes resources and situations used by the researcher that are not present in the situation to which the researcher wishes to generalize.
    • Significance—the notion that differences between two groups or conditions are not due simply to chance.
    • Single-subject design—attempts to capture the effects of an intervention on individuals rather than pooling individual differences as in group designs; also known as single-case design.
    • Solomon four-group design—a design that controls for possible pretest sensitization and looks at possible interactions of pretest and experimental conditions by adding two additional groups that do not receive the pretest.
    • Spearman rho—statistic typically used to evaluate the relationship between variables that are rank or rating scores (i.e., ordinal data) such as one might find on a survey item with a scale of strongly agree to strongly disagree.
    • Split-half reliability—the degree to which a study participant receives a similar score on one half of the test items as compared to the other half; requires only one administration of the measure.
    • Standardized, open-ended interview—interview in which researcher has predetermined the questions but the responses can vary by participant.
    • Statistical conclusion validity—the validity with which a study's conclusions about covariation are appropriate, given the statistical tests used.
    • Statistical power—the probability that one will observe a treatment effect when one occurs.
    • Statistical test assumption—basic situations that must be in place for a statistical test to be appropriate.
    • Statistically significant—the change in a variable is greater than the predicted change due to chance.
    • Stratified sampling—sampling in which the researcher creates subgroups in order to guarantee their representation in a sample.
    • Structured responses—include options for answers to questions.
    • Subcategories—smaller divisions of data that share common characteristics within a larger category.
    • Survey research—research in which a researcher asks a sample group questions.
    • Symbolic interactionism—a conceptual framework consisting of three main premises: (1) human beings act toward things on the basis of the meanings that the things have for them, and such things include everything that the human being may note in this world; (2) the meaning of such things is derived from, or arises out of, the social interaction that one has with one's fellows; and (3) these meanings are handled in, and modified through, an interpretative process one uses in dealing with the things one encounters.
    • Test-retest reliability—the degree to which a study participant achieves a similar score on an assessment measure when the entire measure is administered once and then is administered again.
    • Themes—common concepts in qualitative research that emerge from several categories of multiple data sources over the life of a qualitative study.
    • Theoretical sampling—sampling on the basis of concepts that have proven theoretical relevance to the evolving theory.
    • Theoretical saturation—the point in a research project when all or most new data fit into existing categories.
    • Theoretical sensitivity—the insight and understanding of the topic of interest that the researcher brings to the study; the experience, philosophical disposition, and conceptual framework that a researcher uses as a lens through which to analyze interview, observation, and document data.
    • Threats to validity—variables that could reduce the validity of a study.
    • Time-based observations—involve keeping track of the duration of a particular behavior.
    • Time-sampling observation—helpful for determining the extent or duration of a behavior during a predetermined period of time.
    • Time-series group design—a design in which (1) a group of participants is observed or administered some form of measurement at multiple given intervals to establish a valid baseline effect, (2) an intervention is introduced, and (3) participants are observed or administered the same form of measurement at multiple given intervals to see if there has been a change in behavior or performance.
    • Tradition—scholarly approach whose conceptual underpinnings inform qualitative research; also called domain.
    • Treatment—the intervention; the conditions created by the researcher to produce a result.
    • Treatment characteristics—a threat to external validity that occurs when a treatment method or condition cannot be replicated with the population to which the researcher wishes to generalize.
    • Treatment fidelity—fidelity with which the treatment or intervention is implemented throughout the study; also called fidelity to implementation, implementation fidelity, and intervention fidelity.
    • Triangulate—verify conclusions through the use of multiple methods (e.g., observation and interview).
    • Trustworthiness—how a researcher can persuade readers that the study is worthwhile and credible.
    • t test—used to test the null hypothesis that the mean scores of two groups are the same.
    • Two-way ANOVA—an analysis of variance used when a study involves two independent variables.
    • Type II error—researcher fails to reject the null hypothesis even when it is false.
    • Typical case sampling—researcher examines the most often occurring situation or participant to give a representative description.
    • Unit of analysis—the focus of the data analysis in a research study, for example, whether it is on individual performance, class performance, or school performance in a study.
    • Unstructured responses—responses that are open-ended and require respondents to give the answers that they feel are appropriate.
    • Validity—the best available approximation to the truth or falsity of propositions in a study.
    • Variables—the changeable parts of studies that researchers may want to manipulate; sometimes called factors.
    • Verbal frequency scale—the respondent is asked to rate how often something occurs along a continuum.
    • Working plan—a guide with which a researcher begins an inquiry that is relatively general in nature and is not prescriptive; also called working design or emergent design.

    About the Author

    Daniel J. Boudah is an Associate Professor and Director of Graduate Studies in the Department of Curriculum and Instruction at East Carolina University. Dr. Boudah previously taught general education and special education in public schools. He has been awarded federal and other grants and carried out school-based research in the areas of teacher planning and inquiry, learning strategies, content enhancement, systems change, and collaborative instruction. He has published work in professional journals, textbooks, newsletters, and teacher-training materials. Dr. Boudah has spoken at numerous national, international, and state conferences. He has received awards for excellence from several organizations in the field of education. Dr. Boudah has conducted many professional development, curriculum design, program evaluation, grant and foundation proposal development, and system change activities with public and private schools, as well as public and private agencies, to develop and support services to low-performing and at-risk students. He is a past president of the Council for Learning Disabilities. Dr. Boudah's continuing professional interests include programs and services for low-performing students and students with disabilities, learning and instructional strategies, dropout prevention, and systems change. Dr. Boudah can be reached on Facebook.
