Practical Program Evaluations: Getting from Ideas to Outcomes

Gerald Andrews Emison

    Dedication

    To Robert and Carol Emison and Grace and Beau, whose memories guide me still

    Preface

    Program evaluation is an important way to advance the public interest. It opens windows to improving the performance of public organizations. Performance in the public sector has always been a major concern, and the past decade has seen an increasing emphasis on it. Whether termed reinventing government, new public management, or results-based management, this new emphasis on results reflects the fact that achievement requires reflection, and program evaluation is institutionalized reflection. It enables the intellectual underbrush to be cleared away and performance improvements to be identified. This identification, however, is not enough for improvements to be realized; implementation is also necessary. This book concerns the practices that heighten the likelihood that a program evaluation will lead to implemented recommendations and subsequent improvement.

    As a career member of the federal senior executive service for more than twenty years, I saw that two components typically made up successful program improvements. The first was rigorous, unbiased, and thorough analysis. The second was a series of practices that enabled decision makers to translate the analysis into change. The first component is the focus of most program evaluation texts and courses, whereas the second is usually left to “education by osmosis” once an evaluator begins work in the “real world.” The focus on the former at the expense of the latter is easy to understand. Learning the tools and methodology of program evaluation is not easy, so teaching competency in these skills is crucial. The latter, by contrast, can seem like mere common sense, easily picked up on the fly. But what seems obvious is often not so obvious to students first entering the workforce.

    It is not necessary to leave a critical aspect of successful evaluation to happenstance. This book identifies those practices that savvy evaluators follow so that their evaluations get implemented. It adds another dimension to the preparation, reflection, and practice that compose the essentials of program evaluation—a handy way to offer concrete advice and reinforce the practical.

    The foundations of this book are my own experiences in the public sector and in the classroom. I initially conducted program evaluations as an analyst for the U.S. Environmental Protection Agency (EPA). As a manager, and later as director of the program evaluation division, I saw the effective, the ineffective, and the neglected as the division dealt with highly controversial issues. During this period I had many conversations with colleagues about what made an evaluation truly worthwhile. Almost every practitioner gave the same answer: rigor combined with practical action, measured by successful change.

    My work on evaluations led to my crossing over from evaluator to director of a large EPA regulatory program. As the director of air quality planning and standards, I found myself a customer of program evaluations and policy analyses. In this role I was able to observe, during my interactions with political executives and senior career appointee colleagues, what worked and what did not. This experience validated my belief that a combination of rigor with the practical is essential. When I moved to a regional office to become its senior career executive, my observations were reconfirmed in yet another venue. A good program evaluation needs conceptual rigor and practical application in order to be implemented.

    Shortly after I left the regional office, I found myself teaching policy analysis in a university setting. As a practitioner I often had wondered why the practical skills essential for successful evaluations were so randomly distributed among newly graduated evaluators. I soon realized it was because most academic training in program evaluation emphasized conceptual preparation without much stress on practical pathways to success. When I retired from the senior executive service and became a full-time academic, I could not find a satisfactory text that exposed my students to this complementary aspect. So I wrote this book.

    For those teaching introductory program evaluation courses, this book supplements the many fine core texts available. It introduces the practices essential to effectiveness in applied settings. It supplements, rather than replaces, the conceptual emphasis that is the staple of traditional program evaluation courses. The intention is to round out graduate students’ education and preparation. Its most useful place is early in a graduate program evaluation course, when students can employ this guidance on the content of the course throughout a semester. The book also can serve as an accessible reference to remind practicing evaluators in the rush of day-to-day work what is important for effectiveness.

    The book is organized to promote accessibility. Chapter 1 explains the reasoning behind the text and its relevance to today's program evaluator. Chapter 2 places the book in the terrain of the overall enterprise of program evaluation. The text's core lies in the next four chapters. Each examines a key attribute of successful practical program evaluations. The 4Cs—client, content, control, and communication—are used to bundle the essential practices and to examine a series of related practices in a framework that students can return to easily.

    Since this book is practice-based, it is impossible to thank adequately everyone who played a part; it is my exposure to many dedicated public officials that enabled me to write it. Nevertheless, a number of people contributed mightily to my ideas. Ron Brand, first as my boss, then as my mentor and friend, contributed extensively to most of the ideas found within. Stan Meiburg, as a staff assistant and then a colleague, has never failed to offer new insight into public program evaluation. John Thillmann, David Ziegele, and Tom Kelly were always able to bring me back to earth and remind me that if a practice does not improve program performance in the long run, it is not worth extensive effort. And the staffs of the EPA's Office of Air Quality Planning and Standards in Washington, D.C., and Research Triangle Park, North Carolina, and of the Seattle, Washington, regional office consistently demonstrated that advancing the public's interest over the long run was why we were in the game.

    My colleagues at Mississippi State University deserve special thanks, since they provided both the models and the encouragement to pursue this project. Similarly, colleagues at Duke University gave me the opportunity to structure these ideas into a coherent package for the first time.

    This work had its start in a conversation with Michael Dunaway of CQ Press. I am indebted to him for recognizing that there might be a book somewhere in my ramblings. Charisse Kiino has served as my guide and encourager at CQ Press, and this book simply would not exist without her counsel and encouragement. I am quite grateful for her persistence and patience. The production of the book was aided immeasurably by the thoughtful review of Abigail Harrison Emison. I also would like to thank the reviewers, whose supportive yet unvarnished critiques led to improvements that would not have occurred without their input: Daniel Baracskay, Valdosta State University; Steve Daniels, California State University at Bakersfield; Heidi Koenig, Northern Illinois University; Laura Langbein, American University; Tom Liou, University of Central Florida; Peter Mameli, City University of New York–John Jay College of Criminal Justice; Elizabethann O'Sullivan, North Carolina State University; and Dan Williams, City University of New York–Baruch College.

    Last, I must thank the one person who has been a model of good humor and support, not only in the writing of this book but also in the experiences upon which it draws. My wife, Donna Kay Harrison, has been for more than thirty years the litmus test for the practicality, humanity, and wisdom of the major choices I have made in my career. Without her, not only would this book have been impossible, but so would the career that underlies it. I owe her a debt that is simply unpayable. But it is balanced, I hope, by a bottomless well of gratitude.

    Such a wide range of contributors has allowed me to bring many experiences to this book. While many helped, however, the work is mine alone; I take responsibility for it, and I gratefully thank everyone who contributed, directly or indirectly.

    Endnotes

    Chapter One

    1. William James, “What Pragmatism Means” (1907), in Pragmatism: A Reader, ed. Louis Menand (New York: Vintage, 1997).

    2. Peter Drucker, The Effective Executive (New York: Harper & Row, 1966).

    Chapter Two

    1. Peter H. Rossi, Howard E. Freeman, and Mark W. Lipsey, Evaluation: A Systematic Approach, 6th ed. (Thousand Oaks, Calif.: Sage, 1999), 93–111.

    2. Leonard Merewitz and Stephen H. Sosnick, The Budget's New Clothes: A Critique of Planning-Programming-Budgeting and Benefit-Cost Analysis (Chicago: Markham, 1971).

    3. Deborah Stone, Policy Paradox: The Art of Political Decision Making (New York: Norton, 2002).

    4. Henry Weinstein, “1st Suit in State to Attack ‘Intelligent Design’ Filed,” Los Angeles Times, January 11, 2006, http://www.latimes.com.

    5. Tim W. Clark, The Policy Process: A Practical Guide for Natural Resource Professionals (New Haven, Conn.: Yale University Press, 2002).

    6. Martin Meyerson and Edward C. Banfield, Politics, Planning, and the Public Interest (Glencoe, Ill.: Free Press, 1955).

    7. Kenneth Arrow, Social Choice and Individual Values, 2nd ed. (New York: Wiley, 1963).

    8. Brian Barry, Political Argument (Berkeley, Calif.: University of California Press, 1990).

    9. Herbert Simon, “From Substantive to Procedural Rationality,” in Philosophy and Economic Theory, ed. Frank Hahn and Martin Hollis (New York: Oxford University Press, 1979), 65–86; and Thomas McCarthy, The Critical Theory of Jürgen Habermas (Cambridge, Mass.: MIT Press, 1981).

    10. Herbert Simon, Administrative Behavior, 4th ed. (New York: Free Press, 1997).

    11. John Dewey, Liberalism and Social Action (New York: G. P. Putnam, 1935).

    12. Arnold Love, “Implementation Evaluation,” in Handbook of Practical Program Evaluation, 2nd ed., ed. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (San Francisco: Jossey-Bass, 2004), 63–98.

    13. Rossi, Freeman, and Lipsey, Evaluation.

    14. James C. McDavid and Laura R. L. Hawthorn, Program Evaluation and Performance Measurement: An Introduction to Practice (Thousand Oaks, Calif.: Sage, 2006).

    15. Winston S. Churchill, “Address to Parliament,” November 11, 1947, http://www.enterstageright.com/archive/articles/0105/0105churchilldem.htm.

    16. David Braybrooke and Charles E. Lindblom, A Strategy of Decision (New York: Free Press, 1963); and Simon, “From Substantive to Procedural Rationality.”

    17. Simon, “From Substantive to Procedural Rationality”; Robert A. Dahl, A Preface to Democratic Theory (Chicago: University of Chicago Press, 1956); and Braybrooke and Lindblom, A Strategy of Decision.

    18. Simon, “From Substantive to Procedural Rationality.”

    19. Braybrooke and Lindblom, A Strategy of Decision.

    20. Rossi, Freeman, and Lipsey, Evaluation.

    21. Richard D. Bingham and Claire L. Felbinger, Evaluation in Practice: A Methodological Approach, 2nd ed. (New York: Chatham, 2002).

    22. Jody L. Fitzpatrick, James R. Sanders, and Blaine R. Worthen, Program Evaluation: Alternative Approaches and Practical Guidelines, 3rd ed. (Boston: Pearson, 2004).

    23. Richard Berk and Peter H. Rossi, Thinking about Program Evaluation (Newbury Park, Calif.: Sage, 1990).

    24. Earl Babbie, The Practice of Social Research (Belmont, Calif.: Wadsworth, 1992); and Martin Bulmer, The Uses of Social Research: Social Investigation in Public Policy-Making (London: George Allen and Unwin, 1982).

    25. William Trochim, Research Design for Program Evaluation (Beverly Hills, Calif.: Sage, 1984).

    26. Fitzpatrick, Sanders, and Worthen, Program Evaluation.

    27. John A. McLaughlin and Gretchen B. Jordan, “Using Logic Models,” in Handbook of Practical Program Evaluation, 2nd ed., ed. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (San Francisco: Jossey-Bass, 2004), 7–32.

    28. Rossi, Freeman, and Lipsey, Evaluation.

    29. Hubert M. Blalock, An Introduction to Social Research (Englewood Cliffs, N.J.: Prentice-Hall, 1970); and William L. Hays, Statistics (Ft. Worth, Texas: Harcourt Brace, 1994).

    30. Rossi, Freeman, and Lipsey, Evaluation; and Love, “Implementation Evaluation.”

    31. Donald Campbell and Julian Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago: Rand McNally, 1966); and Thomas Cook and Donald Campbell, Quasi-Experimentation: Design and Analysis Issues for Field Settings (Chicago: Rand McNally, 1979).

    32. Charles S. Reichardt and Melvin M. Mark, “Quasi-Experimentation,” in Handbook of Practical Program Evaluation, 2nd ed., ed. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (San Francisco: Jossey-Bass, 2004), 126–149.

    33. Blalock, An Introduction to Social Research; and William H. Greene, Econometric Analysis, 4th ed. (Upper Saddle River, N.J.: Prentice Hall, 2000).

    34. Sharon L. Caudle, “Qualitative Data Analysis,” in Handbook of Practical Program Evaluation, 2nd ed., ed. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (San Francisco: Jossey-Bass, 2004), 417–438.

    35. Anselm Strauss and Juliet Corbin, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 2nd ed. (Thousand Oaks, Calif.: Sage, 1998); Matthew B. Miles and A. Michael Huberman, Qualitative Data Analysis: A Sourcebook of New Methods (Newbury Park, Calif.: Sage, 1984); and Joseph Maxwell, Qualitative Research Design (Thousand Oaks, Calif.: Sage, 1996).

    36. Robert K. Yin, Case Study Research: Design and Methods, 2nd ed. (Newbury Park, Calif.: Sage, 1994).

    37. Joe R. Feagin, Anthony M. Orum, and Gideon Sjoberg, eds., A Case for the Case Study (Chapel Hill, N.C.: University of North Carolina Press, 1991).

    38. Fitzpatrick, Sanders, and Worthen, Program Evaluation; and Melvin M. Mark and R. Lance Shotland, Multiple Methods in Program Evaluation (San Francisco: Jossey-Bass, 1987).

    39. Caudle, “Qualitative Data Analysis”; and Michael Patton, How to Use Qualitative Methods in Evaluation (Newbury Park, Calif.: Sage, 1987).

    40. Caudle, “Qualitative Data Analysis.”

    41. Patton, How to Use Qualitative Methods in Evaluation.

    42. John Van Maanen, “Introduction,” in Varieties of Qualitative Research, ed. John Van Maanen, James M. Dabbs Jr., and Robert R. Faulkner (Newbury Park, Calif.: Sage, 1982), 11–30.

    43. Caudle, “Qualitative Data Analysis.”

    44. Caudle, “Qualitative Data Analysis.”

    45. Yin, Case Study Research.

    46. Caudle, “Qualitative Data Analysis.”

    47. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer, eds., Handbook of Practical Program Evaluation (San Francisco: Jossey-Bass, 1994); and Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer, eds., Handbook of Practical Program Evaluation, 2nd ed. (San Francisco: Jossey-Bass, 2004).

    48. Yvonna Lincoln and Egon Guba, Naturalistic Inquiry (Newbury Park, Calif.: Sage, 1985); Miles and Huberman, Qualitative Data Analysis; and Babbie, The Practice of Social Research.

    49. George Geis, “Formative Evaluation: Developmental Testing and Expert Review,” Performance and Instruction 26 (1987); Harvey Averch, “Megaproject Selection: Criteria and Rules for Evaluating Competing R&D Megaprojects,” Science and Public Policy 20 (1993); and Cynthia Weston, “The Importance of Involving Experts and Learners in Formative Evaluation,” Canadian Journal of Educational Communication 16 (1987): 45–58.

    50. Margery Austin Turner and Wendy Zimmermann, “Role Playing,” in Handbook of Practical Program Evaluation, 2nd ed., ed. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (San Francisco: Jossey-Bass, 2004), 320–339; and Debra L. Dean, “How to Use Focus Groups,” in Handbook of Practical Program Evaluation, ed. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (San Francisco: Jossey-Bass, 1994), 338–348.

    51. Kathryn E. Newcomer and Philip W. Wirtz, “Using Statistics in Evaluation,” in Handbook of Practical Program Evaluation, 2nd ed., ed. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (San Francisco: Jossey-Bass, 2004), 439–478.

    52. Blalock, An Introduction to Social Research; Hays, Statistics; and Greene, Econometric Analysis.

    53. Newcomer and Wirtz, “Using Statistics in Evaluation.”

    54. Rossi, Freeman, and Lipsey, Evaluation.

    55. Leigh Burstein, Howard E. Freeman, and Peter H. Rossi, eds., Collecting Evaluation Data: Problems and Solutions (Newbury Park, Calif.: Sage, 1985).

    56. Newcomer and Wirtz, “Using Statistics in Evaluation.”

    57. Floyd J. Fowler, Survey Research Methods: Applied Social Research Methods (Thousand Oaks, Calif.: Sage, 1993); Arlene Fink and Jacqueline Kosecoff, How to Conduct Surveys: A Step-by-Step Guide (Newbury Park, Calif.: Sage, 1985); and Seymour Sudman and Norman M. Bradburn, Asking Questions: A Practical Guide to Questionnaire Design (San Francisco: Jossey-Bass, 1986).

    58. Fitzpatrick, Sanders, and Worthen, Program Evaluation.

    59. Caudle, “Qualitative Data Analysis.”

    60. Berk and Rossi, Thinking about Program Evaluation.

    61. Edward R. Tufte, The Visual Display of Quantitative Information (Cheshire, Conn.: Graphics Press, 1983); Edward R. Tufte, Envisioning Information (Cheshire, Conn.: Graphics Press, 1990); and Edward R. Tufte, Visual Explanations (Cheshire, Conn.: Graphics Press, 1997).

    62. Gerald E. Jones, How to Lie with Charts (San Jose, Calif.: iUniverse, 2000).

    63. Blalock, An Introduction to Social Research; Hays, Statistics; and Greene, Econometric Analysis.

    64. Darrell Huff, How to Lie with Statistics (New York: W.W. Norton, 1982).

    65. John L. Phillips Jr., How to Think about Statistics (New York: W.H. Freeman, 1996).

    66. Wholey, Hatry, and Newcomer, Handbook of Practical Program Evaluation; Wholey, Hatry, and Newcomer, Handbook of Practical Program Evaluation, 2nd ed.; Rossi, Freeman, and Lipsey, Evaluation; Fitzpatrick, Sanders, and Worthen, Program Evaluation; and McDavid and Hawthorn, Program Evaluation and Performance Measurement.

    67. Newcomer and Wirtz, “Using Statistics in Evaluation.”

    68. James Edwin Kee, “Cost-Effectiveness and Cost-Benefit Analysis,” in Handbook of Practical Program Evaluation, 2nd ed., ed. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (San Francisco: Jossey-Bass, 2004), 506–541; Henry M. Levin, Cost-Effectiveness: A Primer (Newbury Park, Calif.: Sage, 1983); Dale E. Berger, “Using Regression Analysis,” in Handbook of Practical Program Evaluation, 2nd ed., ed. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer (San Francisco: Jossey-Bass, 2004), 479–505; and Lawrence B. Mohr, Impact Analysis for Program Evaluation, 2nd ed. (Thousand Oaks, Calif.: Sage, 1995).

    69. McLaughlin and Jordan, “Using Logic Models”; and Office of Management and Budget, “Program Assessment Rating Tool,” http://www.whitehouse.gov/omb/part/index.html.

    Chapter Four

    1. Rudyard Kipling, Just So Stories (London: Macmillan, 1902).

    2. Peter J. Haas and J. Fred Springer, Applied Policy Research: Concepts and Cases (New York: Garland, 1998), 161–184.

    3. Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer, eds., Handbook of Practical Program Evaluation, 2nd ed. (San Francisco: Jossey-Bass, 2004).

    4. Steven Cohen and Ronald Brand, Total Quality Management in Government (San Francisco: Jossey-Bass, 1993).

    5. Wholey, Hatry, and Newcomer, Handbook of Practical Program Evaluation.

    6. Peter Drucker, The Effective Executive (New York: Harper & Row, 1966).

    Chapter Five

    1. Arthur Bloch, Murphy's Law: The 26th Anniversary Edition (New York: Penguin, 2003).

    2. Klaus Mainzer, Thinking in Complexity (Berlin: Springer, 1996).

    3. Stephen E. Ambrose, Eisenhower: Soldier, General of the Army, President-Elect, 1890–1952 (New York: Simon and Schuster, 1983).

    4. Roger Fisher and William Ury, Getting to Yes (Boston: Houghton Mifflin, 1981).

    Chapter Six

    1. Rudolf Flesch, The Art of Readable Writing (New York: Harper & Row, 1974).

    2. Eugene J. McCarthy, “An Indefensible War,” in Great American Speeches, ed. Gregory R. Suriano (New York: Gramercy Books, 1993), 244–248.

    3. Barbara Jordan, “On the Impeachment of the President,” in Great American Speeches, ed. Gregory R. Suriano (New York: Gramercy Books, 1993), 281–286.

