Ethical Agreements
One difference between evaluators and researchers is that evaluators are hired by clients. Many clients are new to evaluation or may have inappropriate views of the evaluator's role and the nature of evaluation work. For this reason and many others, the evaluator-client relationship is fraught with potential ethical conflicts.
The Program Evaluation Standards and the Guiding Principles for Evaluators both speak to agreements between clients and evaluators. The standards give particular attention to formal agreements. In practice, however, evaluation contracts often focus on the evaluation plan and budget. Written agreements specifying the roles and responsibilities of the evaluator, the client, and other stakeholders can be useful in clarifying expectations and avoiding some, though certainly not all, future ethical dilemmas.
Ethical agreements may address the responsibilities of the evaluator; the roles of clients, program staff and participants, and other stakeholders; methodological issues; and interpretation and use of results. Many evaluators choose to share the Guiding Principles or Program Evaluation Standards with clients to educate them about evaluators' ethical obligations. Although the specific content of ethical agreements will differ with the nature of the evaluation, a few common areas will be discussed here.
Evaluators are ethically bound to include different stakeholders and to consider the public interest. Clients new to evaluation may consider themselves to be the sole audience for the study and may be unaware of evaluators' broader obligations. Although actions bearing on the public good may be difficult to anticipate, delineating the role of the client and of other stakeholder groups at each stage of the evaluation is a first step in raising the client's awareness that the evaluator will actively involve other groups.
The collection and management of data have important ethical implications. The evaluator and client must agree on the use of informed consent and confidentiality and the means for maintaining promised confidentiality. Who will manage, store, and keep the data? Under what circumstances, if any, will the client and other stakeholders be provided access to raw data? How will confidentiality of these data be maintained?
The interpretation and reporting of results is another thorny area. In many cases, evaluators seek input from clients and other stakeholders on draft reports, but the final report is the evaluator's domain. Evaluators have responsibilities for ensuring that results are disseminated in appropriate ways to different stakeholders. Who will write or present results to various audiences? What role will clients and stakeholders have in presenting and disseminating results, and in making choices about the information presented to each audience?
Written agreements cannot incorporate all potential ethical conflicts because all possibilities cannot be foreseen at the time the agreement is written. Contractualism, or blindly following the contract, can also lead to ethical mistakes. Nevertheless, the evaluator has a responsibility to educate clients about ethical issues that arise in evaluation and about the evaluator's ethical obligations. Ethical agreements provide the means for opening this discussion and clarifying responsibilities so that the rights of the client, program participants, other stakeholders, and the public, as well as the integrity of the evaluation, are preserved.