Guiding Principles for Evaluators
A mark of the maturity of a profession is the development of professional codes of conduct. Such codes describe the ethics and standards that the professionals should abide by in their work. The codes are usually developed by relevant professional associations.
The American Evaluation Association (AEA) is a professional association devoted to the study and practice of evaluation. AEA began in 1986 with the merger of two predecessor organizations, the Evaluation Network and the Evaluation Research Society. Over the ensuing years, the AEA board of directors and membership expressed increased interest in drafting a code of professional conduct. In 1992, the AEA board of directors formed a task force with the goal of drafting a code of conduct for the evaluation profession. The resulting document was the AEA Guiding Principles for Evaluators, adopted in 1994 and published in 1995.
Evaluators had considered matters related to professional conduct prior to the formation of the AEA task force in 1992. For example, the Joint Committee on Standards for Educational Evaluation published the Standards for Evaluations of Educational Programs, Projects, and Materials in 1981, and the Evaluation Research Society had its own ethical code, as well. However, AEA never formally adopted any of these codes of conduct, largely because the AEA board believed that AEA needed to develop its own code rather than adopt one from another group.
The AEA board specifically charged the task force with developing general guiding principles rather than standards. Principles are general and abstract guides to professional conduct, whereas standards are specific and detailed recommendations for how the principles should be operationalized in practice. Development of such standards specifically for AEA is a task that remains for the future. The hope was that, by virtue of their generality, the principles would apply to evaluators of all kinds in all the diverse settings in which they work.
The task force comprised four volunteers drawn from the 1992 AEA board of directors. Its members worked in a variety of evaluation settings, including academia, private evaluation practice, and government administration. The task force reviewed pertinent literature and relevant codes from other professional organizations and developed an initial draft.
The task force then obtained feedback on the content of the draft from the AEA board of directors and membership through meetings, written correspondence, and several symposia at the 1993 AEA annual conference. The AEA board of directors accepted the final draft of the Guiding Principles in January 1994. A membership vote led to the adoption of the Guiding Principles later that year.
The Guiding Principles are intended to guide the behavior of evaluators proactively and to inform clients, stakeholders, and the public about what to expect from professional evaluation. The five AEA guiding principles for evaluators are as follows:
- Systematic inquiry. Evaluators conduct systematic, data-based inquiries about whatever is being evaluated.
- Competence. Evaluators provide competent performance to stakeholders.
- Integrity and honesty. Evaluators ensure the honesty and integrity of the entire evaluation process.
- Respect for people. Evaluators respect the security, dignity, and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact.
- Responsibilities for general and public welfare. Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare.
The principle of systematic inquiry includes adherence to the highest appropriate technical standards and effective communication with clients about the strengths and shortcomings of the evaluation questions and of the methods used to answer them.

Competence refers to evaluators' responsibilities to possess the skills appropriate to the tasks the evaluation requires and to continue their professional development.

The principle of integrity and honesty involves evaluators' responsibilities to be forthcoming about costs, limitations of the methodology or results obtained, changes to project plans, conflicts of interest, and sources of financial support. This principle also includes evaluators' responsibility to try to prevent the misuse of their work by others.

Respect for people encompasses evaluators' responsibilities to abide by other relevant professional ethics and standards, to maximize benefits and minimize risks to participants and clients, and to respect individual, group, and cultural differences among participants.

The section on responsibilities for general and public welfare stresses that evaluators take into account the implications of their work for clients, stakeholders, and the public good; that they foster the free exchange of information with stakeholders; and that they maintain a balance between client needs and other needs bearing on the general and public welfare.
...
- Concepts, Evaluation
- Personnel Evaluation
- Advocacy in Evaluation
- Evaluand
- Evaluation
- Evaluator
- Evaluator Roles
- External Evaluation
- Formative Evaluation
- Goal
- Grading
- Independence
- Internal Evaluation
- Judgment
- Logic of Evaluation
- Merit
- Metaevaluation
- Objectives
- Personnel Evaluation
- Process Evaluation
- Product Evaluation
- Program Evaluation
- Quality
- Ranking
- Standard Setting
- Standards
- Summative Evaluation
- Synthesis
- Value Judgment
- Values
- Worth
- Concepts, Methodological
- 360-Degree Evaluation
- Accountability
- Achievement
- Affect
- Analysis
- Applied Research
- Appraisal
- Appropriateness
- Assessment
- Audience
- Best Practices
- Black Box
- Capacity Building
- Client
- Client Satisfaction
- Consumer
- Consumer Satisfaction
- Control Conditions
- Cost
- Cost Effectiveness
- Criterion-Referenced Test
- Critique
- Cut Score
- Description
- Design Effects
- Dissemination
- Effectiveness
- Efficiency
- Feasibility
- Hypothesis
- Impact Assessment
- Implementation
- Improvement
- Indicators
- Inputs
- Inspection
- Interpretation
- Intervention
- Interviewing
- Literature Review
- Longitudinal Studies
- Measurement
- Modus Operandi
- Most Significant Change Technique
- Norm-Referenced Tests
- Opportunity Costs
- Outcomes
- Outputs
- Peer Review
- Performance Indicator
- Performance Program
- Personalizing Evaluation
- Rapport
- Reactivity
- Reliability
- Sampling
- Score Card
- Secondary Analysis
- Services
- Setting
- Significance
- Situational Responsiveness
- Social Indicators
- Sponsor
- Stakeholder Involvement
- Treatments
- Triangulation
- Concepts, Philosophical
- Verstehen
- Aesthetics
- Ambiguity
- Amelioration
- Argument
- Authenticity
- Authority of Evaluation
- Bias
- Conclusions, Evaluative
- Consequential Validity
- Construct Validity
- Context
- Credibility
- Criteria
- Difference Principle
- Empiricism
- Epistemology
- Equity
- External Validity
- Falsifiability
- Generalization
- Hermeneutics
- Inference
- Internal Validity
- Interpretation
- Interpretivism
- Logical Positivism
- Meaning
- Means-End Relations
- Moral Discourse
- Objectivity
- Ontology
- Paradigm
- Pareto Optimal
- Pareto Principle
- Phenomenology
- Point of View
- Positivism
- Postmodernism
- Postpositivism
- Praxis
- Probative Logic
- Proxy Measure
- Rationality
- Relativism
- Subjectivity
- Tacit Knowledge
- Trustworthiness
- Understanding
- Validity
- Value-Free Inquiry
- Values
- Veracity
- Concepts, Social Science
- Capitalism
- Chaos Theory
- Constructivism
- Critical Incidents
- Deconstruction
- Dialogue
- Disenfranchised
- Experimenting Society
- Feminism
- Great Society Programs
- Ideal Type
- Inclusion
- Lesbian, Gay, Bisexual, and Transgender Issues in Evaluation
- Minority Issues in Evaluation
- Persuasion
- Policy Studies
- Politics of Evaluation
- Qualitative-Quantitative Debate in Evaluation
- Social Class
- Social Context
- Social Justice
- Ethics and Standards
- The Program Evaluation Standards
- Certification
- Communities of Practice (CoPs)
- Confidentiality
- Conflict of Interest
- Ethical Agreements
- Ethics
- Guiding Principles for Evaluators
- Honesty
- Human Subjects Protection
- Impartiality
- Informed Consent
- Licensure
- Profession of Evaluation
- Propriety
- Public Welfare
- Reciprocity
- Social Justice
- Teaching Evaluation
- Evaluation Approaches and Models
- Accreditation
- Action Research
- Appreciative Inquiry
- Artistic Evaluation
- Auditing
- CIPP Model (Context, Input, Process, Product)
- Cluster Evaluation
- Community-Based Evaluation
- Connoisseurship
- Cost-Benefit Analysis
- Countenance Model of Evaluation
- Critical Theory Evaluation
- Culturally Responsive Evaluation
- Deliberative Democratic Evaluation
- Democratic Evaluation
- Developmental Evaluation
- Empowerment Evaluation
- Evaluative Inquiry
- Experimental Design
- Feminist Evaluation
- Fourth-Generation Evaluation
- Goal-Free Evaluation
- Illuminative Evaluation
- Inclusive Evaluation
- Institutional Self-Evaluation
- Judicial Model of Evaluation
- Kirkpatrick Four-Level Evaluation Model
- Logic Model
- Models of Evaluation
- Multicultural Evaluation
- Naturalistic Evaluation
- Objectives-Based Evaluation
- Participatory Action Research (PAR)
- Participatory Evaluation
- Participatory Monitoring and Evaluation
- Quasiexperimental Design
- Realist Evaluation
- Realistic Evaluation
- Responsive Evaluation
- Success Case Method
- Transformative Paradigm
- Utilization-Focused Evaluation
- Evaluation Practice Around the World, Stories
- Evaluation Planning
- Evaluation Theory
- Laws and Legislation
- Organizations
- Abt Associates
- Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP)
- American Evaluation Association (AEA)
- American Institutes for Research (AIR)
- Buros Institute
- Center for Instructional Research and Curriculum Evaluation (CIRCE)
- Center for Research on Evaluation, Standards, and Student Testing (CRESST)
- Center for the Study of Evaluation (CSE)
- Centers for Disease Control and Prevention (CDC)
- Centre for Applied Research in Education (CARE)
- ERIC Clearinghouse on Assessment and Evaluation
- Evaluation Center, The
- Evaluation Research Society (ERS)
- Evaluators' Institute™, The
- General Accounting Office (GAO)
- International Development Evaluation Association (IDEAS)
- International Development Research Center (IDRC)
- International Organization for Cooperation in Evaluation (IOCE)
- International Program in Development Evaluation Training (IPDET)
- Joint Committee on Standards for Educational Evaluation
- Mathematica Policy Research
- MDRC
- National Assessment of Educational Progress (NAEP)
- National Institutes of Health (NIH)
- National Science Foundation (NSF)
- Organisation for Economic Co-operation and Development (OECD)
- Performance Assessment Resource Centre (PARC)
- Philanthropic Evaluation
- RAND Corporation
- Research Triangle Institute (RTI)
- United States Agency for International Development (USAID)
- Urban Institute
- Westat
- WestEd
- World Bank
- World Conservation Union (IUCN)
- People
- Abma, Tineke A.
- Adelman, Clem
- Albæk, Erik
- Alkin, Marvin C.
- Altschuld, James W.
- Bamberger, Michael J.
- Barrington, Gail V.
- Bhola, H. S.
- Bickel, William E.
- Bickman, Leonard
- Bonnet, Deborah G.
- Boruch, Robert
- Brisolara, Sharon
- Campbell, Donald T.
- Campos, Jennie
- Chalmers, Thomas
- Chelimsky, Eleanor
- Chen, Huey-Tsyh
- Conner, Ross
- Cook, Thomas D.
- Cooksy, Leslie
- Cordray, David
- Cousins, J. Bradley
- Cronbach, Lee J.
- Dahler-Larsen, Peter
- Datta, Lois-ellin
- Denny, Terry
- Eisner, Elliot
- Engle, Molly
- Farrington, David
- Fetterman, David M.
- Fitzpatrick, Jody L.
- Forss, Kim
- Fournier, Deborah M.
- Freeman, Howard E.
- Frierson, Henry T.
- Funnell, Sue
- Georghiou, Luke
- Glass, Gene V
- Grasso, Patrick G.
- Greene, Jennifer C.
- Guba, Egon G.
- Hall, Budd L.
- Hastings, J. Thomas
- Haug, Peder
- Henry, Gary T.
- Hood, Stafford L.
- Hopson, Rodney
- House, Ernest R.
- Hughes, Gerunda B.
- Ingle, Robert
- Jackson, Edward T.
- Julnes, George
- King, Jean A.
- Kirkhart, Karen
- Konrad, Ellen L.
- Kushner, Saville
- Leeuw, Frans L.
- Levin, Henry M.
- Leviton, Laura
- Light, Richard J.
- Lincoln, Yvonna S.
- Lipsey, Mark W.
- Lundgren, Ulf P.
- Mabry, Linda
- MacDonald, Barry
- Madison, Anna Marie
- Mark, Melvin M.
- Mathison, Sandra
- Mertens, Donna M.
- Millet, Ricardo A.
- Moos, Rudolf H.
- Morell, Jonathan A.
- Morris, Michael
- Mosteller, Frederick
- Narayan, Deepa
- Nathan, Richard
- Nevo, David
- Newcomer, Kathryn
- Newman, Dianna L.
- O'Sullivan, Rita
- Owen, John M.
- Patel, Mahesh
- Patton, Michael Quinn
- Pawson, Ray
- Pollitt, Christopher
- Porteous, Nancy L.
- Posavac, Emil J.
- Preskill, Hallie
- Reichardt, Charles S. (Chip)
- Rist, Ray C.
- Rog, Debra J.
- Rogers, Patricia J.
- Rossi, Peter H.
- Rugh, Jim
- Russon, Craig W.
- Ryan, Katherine E.
- Sanders, James R.
- Scheirer, Mary Ann
- Schwandt, Thomas A.
- Scriven, Michael
- Shadish, William R.
- Shulha, Lyn M.
- Simons, Helen
- Smith, M. F.
- Smith, Nick L.
- Stake, Robert E.
- Stanfield, John II
- Stanley, Julian C.
- Stufflebeam, Daniel L.
- Tilley, Nick
- Torres, Rosalie T.
- Toulemonde, Jacques
- Trochim, William
- Tyler, Ralph W.
- VanderPlaat, Madine
- Wadsworth, Yoland
- Walberg, Herbert J.
- Walker, Rob
- Weiss, Carol Hirschon
- Whitmore, Elizabeth
- Wholey, Joseph S.
- Wildavsky, Aaron B.
- Worthen, Blaine R.
- Wye, Christopher G.
- Publications
- American Journal of Evaluation
- Evaluation & the Health Professions
- Evaluation and Program Planning
- Evaluation Review: A Journal of Applied Social Research
- Evaluation: The International Journal of Theory, Research and Practice
- New Directions for Evaluation (NDE)
- Practical Assessment, Research & Evaluation (PARE)
- The Personnel Evaluation Standards
- The Program Evaluation Standards
- EvalTalk
- Guiding Principles for Evaluators
- Qualitative Methods
- Archives
- Checklists
- Comparative Analysis
- Constant Comparative Method
- Content Analysis
- Cross-Case Analysis
- Deliberative Forums
- Delphi Technique
- Document Analysis
- Emergent Design
- Emic Perspective
- Ethnography
- Etic Perspective
- Fieldwork
- Focus Group
- Gendered Evaluation
- Grounded Theory
- Group Interview
- Key Informants
- Mixed Methods
- Narrative Analysis
- Natural Experiments
- Negative Cases
- Observation
- Participant Observation
- Phenomenography
- Portfolio
- Portrayal
- Qualitative Data
- Rapid Rural Appraisal
- Reflexivity
- Rival Interpretations
- Thick Description
- Think-Aloud Protocol
- Unique-Case Analysis
- Unobtrusive Measures
- Quantitative Methods
- Aggregate Matching
- Backward Mapping
- Benchmarking
- Concept Mapping
- Correlation
- Cross-Sectional Design
- Errors of Measurement
- Fault Tree Analysis
- Field Experiment
- Matrix Sampling
- Meta-analysis
- Multitrait-Multimethod Analysis
- Panel Studies
- Pre-Post Design
- Quantitative Data
- Quantitative Weight and Sum
- Regression Analysis
- Standardized Test
- Statistics
- Surveys
- Time Series Analysis
- Representation, Reporting, Communicating
- Systems
- Technology
- Utilization