High Impact Internal Evaluation: A Practitioner's Guide to Evaluating and Consulting Inside Organizations

Richard C. Sonnichsen


    Dedication

    This book is dedicated to my wife and best friend Sally in thanks for her love and friendship.

    Preface

    Unbeknownst to me at the time, the germination of this book began in the fall of 1974, when I was transferred from a field office assignment to the Office of Planning and Evaluation (OPE) at Federal Bureau of Investigation (FBI) headquarters in Washington, D.C. This new assignment began my evaluation experience. For the next five years, I evaluated programs, studied policy issues, observed how large organizations functioned, and became actively involved in the implementation of evaluation recommendations that called for a major revision of the FBI's approach to its investigative activities. My fascination with organizational structure, policy formulation, organizational change, information distribution dynamics, personnel behavior and management, and the value of independent evaluative activities and their effect on organizations, begun over two decades ago, continues to this day and is the primary rationale for engaging in this writing exercise.

    In 1979, I was assigned as the assistant special agent in charge (ASAC) of the FBI's San Francisco office for two years; then, after a tour of duty on the FBI inspection staff, I was appointed director of OPE in 1982. As I began the task of managing OPE, I was struck by the lack of training opportunities and available information on internal evaluation. Although the most conspicuous taxonomic distinction in the field of evaluation is internal/external, I discovered that a dearth of material existed about the theory and practice of internal evaluation. Inquiry among evaluation professionals led me to conclude that the majority of the evaluation literature was created by academics writing about their experiences as external evaluation contractors. Few incentives existed for internal evaluators to write about their experiences because organizational reward systems did not attach any significant value to authoring journal articles, nor did most organizations wish to expose their internal operations to the outside world. Fortunately, during the past decade, some growth in the internal evaluation literature has occurred, albeit incremental, and this book is an attempt to build on that limited literature. One of the motivations for my commentary about internal evaluation is the hope that it will encourage other internal evaluators, a rapidly growing and significant segment of the evaluation community, to write and publish their experiences and contribute to the development of an internal evaluation literature.

    The goal of this book is to present a philosophical, administrative, methodological, and ethical framework for the practice of internal evaluation that results in high-quality evaluations with positive impacts on organizations. The content of the book draws on my experience as an evaluator in the FBI for five years, as the director of the FBI's Office of Planning, Evaluation, and Audits (OPEA) for 12 years, my management and evaluation consulting activities, research I conducted during graduate school, and feedback from master's and doctoral students in evaluation classes I taught at the University of Southern California's Washington Public Affairs Center (WPAC). This book also represents an accumulation of knowledge gathered from my association with scholars and practitioners in the American Evaluation Association (AEA) and the International Working Group on Evaluation (INTEVAL),1 who have freely shared their wisdom with me. My substantial debt to these colleagues is best expressed in the elegant prose of Henry David Thoreau, who wrote: “I think it will be found that he who speaks with most authority on a given subject is not ignorant of what has been said by his predecessors. He will take his place in a regular order, and substantially add his own knowledge to the knowledge of previous generations.”

    I owe a great deal of gratitude to numerous friends, relatives, and colleagues who have encouraged and advised me during my work and growth as an evaluator, organizational observer, and manager. Although I am fully responsible for the content and any errors in this book, without the guidance and camaraderie of these associates, this effort would never have begun or been completed. It was in graduate school at the University of Southern California, WPAC, where I began learning about the craft of evaluation from Joe Wholey, who graciously shared his time and extensive knowledge of evaluation with me. It was also at WPAC where Chris Bellavita challenged me to examine and begin to understand the rationale and theoretical implications behind my management behavior as an FBI executive. His thoughtful insights ignited my interest in organizational theory and human behavior and were instrumental in changing my conception of managing people. Without the friendly prodding of my FBI boss and friend John Glover, I never would have attended graduate school. His friendship, support, and joint attendance with me at WPAC helped make that a rewarding experience. Ray Rist, then at the General Accounting Office (GAO) and now at the World Bank, invited me to learn about the international aspects of evaluation by joining INTEVAL, which he chairs, and participating in its annual conferences. This 10-year association with a worldwide community of evaluation practitioners and scholars has expanded my evaluation horizons and cemented personal and professional friendships on five continents.

    Attendance at American and Canadian annual evaluation conferences for over a decade also contributed to my understanding of the craft of evaluation. My evaluation education was further supplemented by attending meetings of Washington Evaluators (WE), a group of evaluation practitioners in the Washington, D.C. area founded by Mike Hendricks, and “brown bag lunches” sponsored by Joe Wholey at WPAC, where evaluation practitioners and scholars discussed how to improve government programs and practice effective public management.

    I am substantially indebted to Arnold Love, Joe Wholey, and Ray Rist, who devoted considerable time, talent, and energy to reviewing a preliminary draft of this book. Their constructive criticism and valuable suggestions for improvement helped focus my thinking. I also received helpful comments from the following reviewers: Arnold Love, Robert T. Golembiewski, Michael Hendricks, Joseph S. Wholey, and Jonathan A. Morell. Over the past decade, Mike Patton has shared his evaluation expertise with me, offering his friendship, guidance, and astute counsel. His pragmatic approach to the use of evaluation has influenced my thinking and affected my practice. In addition to the persons mentioned above, other colleagues who have been helpful and influenced my evaluation career and the content of this book include Kathryn Newcomer, Mike Hendricks, Midge Smith, Ernie House, Pablo Guerrero, and my associates from AEA and INTEVAL.

    I also want to express my deep appreciation to the federal government evaluation office directors who helped in the research for this book. Chris Wye, Jake Barkdoll, Mike Mangano, Mike Dole, and Glenn Freeman gave me unrestricted access to their evaluation staffs, office documents, and agency personnel. My thanks also go to George Grob, Oliver Cummings, Racqueline Shelton, and Bob Diegelman, internal evaluation office directors who kindly updated me on their current evaluation practices. Thanks go to Sage editor C. Deborah Laughton, who offered me a book contract after reading a draft of the first chapter. The Sage commitment was an incentive to finish a writing project that was beginning to develop an inertia problem. I am also indebted to Kate Peterson at Sage. Her considerable copyediting skills made the book more readable.

    This book has been somewhat of a family affair. My daughter Susan Sonnichsen-Wilson and son-in-law David Wilson used their formidable computer talents to advise me on how to overcome computer glitches; my other daughter, Jennifer Sonnichsen-Parker, designed the cover. My wife and best friend, Sally, has tolerated my many hours at the computer. Her love and friendship have created a family and home environment that I never imagined possible, enabling me to devote a significant amount of time to writing.

    Because of the insular nature of internal evaluation, the extent of its practice can only be estimated and its quality and impact are subject to speculation. The development and evolution of internal evaluation practice will flourish only when its practitioners begin writing, publishing, and sharing their experiences and begin the public dialogue that is important for the advancement of any professional endeavor. Through public discussion and critique, the quality, impact, and influence of internal evaluation on organizations can be examined and debated, clarifying some of the mystery surrounding its practice. I hope that this book will stimulate other internal evaluation practitioners to share their expertise and experiences.

    For those who practice the craft of internal evaluation, we are crossing a threshold into exciting times. The demand for evaluative services is growing, and the practice of internal evaluation is undergoing a paradigmatic reframing. As we approach the 21st century, the advent of the knowledge-based organization, the demand for clearly articulated and measurable performance data, and the information technology revolution have combined to create enormous opportunities for internal evaluators. Producing useful knowledge and constructively evaluating and consulting inside organizations will enable internal evaluators to contribute to improved decision making and more productive, effective, and efficient organizations. This book outlines some of the approaches to these tasks that will aid in the recognition of the value of evaluative efforts, the building of evaluation capacity, and the eventual institutionalization of evaluation as an essential component of modern organizations.

    Richard C. Sonnichsen
    Note

    1. INTEVAL is a group of international evaluation scholars and practitioners from North America, Europe, and Asia who conduct evaluation research and publish the results in books. They began meeting in 1986 under the auspices of the International Institute of Administrative Sciences (IIAS). In 1997, they became independent but continue their research and publishing activities.

  • References

    Adams, K.A. (1985). Gamesmanship for internal evaluators: Knowing when to “hold 'em” and when to “fold 'em.” Evaluation and Program Planning, 8, 53–57. http://dx.doi.org/10.1016/0149-7189%2885%2990020-5
    American Evaluation Association. (1995). Guiding principles for evaluators. In W.R.Shadish, D.L.Newman, M.A.Scheirer, & C.Wye (Eds.), Guiding principles for evaluators (New Directions for Program Evaluation, No. 66). San Francisco: Jossey-Bass.
    Argyris, C. (1982). Reasoning, learning, and action. San Francisco: Jossey-Bass.
    Attkisson, C.C., Brown, T.R., & Hargreaves, W.A. (1978). Roles and functions of evaluation in human service programs. In C.C.Attkisson, W.A.Hargreaves, M.J.Horowitz, & J.E.Sorensen (Eds.), Evaluation of human service programs. New York: Academic Press.
    Barkdoll, G.L. (1982). Increasing the impact of program evaluation by altering the working relationship between the program manager and the evaluator. Unpublished doctoral dissertation, University of Southern California, Los Angeles.
    Barkdoll, G.L. (1985). Type III evaluations: Consultation and consensus. In E.Chelimsky (Ed.), Program evaluation: Patterns and directions. Washington, DC: American Society for Public Administration.
    Barkdoll, G.L., & Sporn, D.L. (1989). Five strategies for successful in-house evaluations. In J.S.Wholey, K.E.Newcomer, & Associates (Eds.), Improving government performance: Evaluation strategies for strengthening public agencies and programs. San Francisco: Jossey-Bass.
    Barrados, M., & Divorski, S. (1996). Evaluation in the federal government. In Report of the auditor general of Canada to the House of Commons. Ottawa: Minister of Public Works and Government Services Canada.
    Bartunek, J.M., & Louis, M.R. (1996). Insider/outsider team research. Thousand Oaks, CA: Sage.
    Bellavita, C. (1986). Communicating effectively about performance is a purposive activity. In J.S.Wholey, M.A.Abramson, & C.Bellavita (Eds.), Performance and credibility: Developing excellence in public and nonprofit organizations. Lexington, MA: Lexington Books.
    Bellavita, C., Wholey, J.S., & Abramson, M.A. (1986). Performance-oriented evaluation: Prospects for the future. In J.S.Wholey, M.A.Abramson, & C.Bellavita (Eds.), Performance and credibility: Developing excellence in public and nonprofit organizations. Lexington, MA: Lexington Books.
    Bickman, L. (1989). Barriers to the use of program theory. Evaluation and Program Planning, 12, 387–390. http://dx.doi.org/10.1016/0149-7189%2889%2990056-6
    Blake, R.R., & Mouton, J.S. (1988). Comparing strategies for incremental and transformational change. In R.H.Kilmann, T.J.Covin, & Associates, Corporate transformation. San Francisco: Jossey-Bass.
    Boyle, R. (1993). Making evaluation relevant. Dublin, Ireland: Institute of Public Administration.
    Boyle, R. (1999). Professionalizing the evaluation function: Human resource development and the building of evaluation capacity. In R.Boyle & D.Lemaire (Eds.), Building effective evaluation capacity: Lessons from practice. New Brunswick, NJ: Transaction Publishing.
    Boyle, R., Lemaire, D., & Rist, R. (1999). Introduction: Building effective evaluation capacity. In R.Boyle & D.Lemaire (Eds.), Building effective evaluation capacity: Lessons from practice. New Brunswick, NJ: Transaction Publishing.
    Bozeman, B., & Bretschneider, S. (1994). The “publicness puzzle” in organization theory: A test of alternative explanations of differences between public and private organizations. Journal of Public Administration Research and Theory, 4, 197–293.
    Braskamp, L.A., Brandenburg, D.C., & Ory, J.C. (1987). Lessons about clients’ expectations. In J.Nowakowski (Ed.), The client perspective on evaluation (New Directions for Program Evaluation, No. 36). San Francisco: Jossey-Bass.
    Brookfield, S.D. (1987). Developing critical thinkers: Challenging adults to explore alternative ways of thinking and acting. San Francisco: Jossey-Bass.
    Broskowski, A., & Driscoll, J. (1978). The organizational context of program evaluation. In C.C.Attkisson, W.A.Hargreaves, M.J.Horowitz, & J.E.Sorensen (Eds.), Evaluation of human service programs. New York: Academic Press.
    Burrell, G., & Morgan, G.L. (1979). Sociological paradigms and organizational analysis. Portsmouth, NH: Heinemann.
    Butler, S.M., Sanera, M., & Weinrad, W.B. (Eds.). (1984). Mandate for leadership II: Continuing the conservative revolution. Washington, DC: Heritage Foundation.
    Callahan, R.E., Watkins, J.F., & Carr, K.D. (1994, November). Internal evaluators as change agents: An operational approach for improving organizational performance. Paper presented at the annual meeting of the American Evaluation Association, Boston.
    Chelimsky, E. (1977). Analysis of a symposium on the use of evaluation by federal agencies (Vol. 2). McLean, VA: MITRE.
    Chelimsky, E. (1985). Old patterns and new directions in program evaluation. In E.Chelimsky (Ed.), Program evaluation: Patterns and directions. Washington, DC: American Society for Public Administration.
    Chelimsky, E. (1992). Executive branch program evaluation: An upturn soon? In C.G.Wye & R.C.Sonnichsen (Eds.), Evaluation in the federal government: Changes, trends, and opportunities (New Directions for Program Evaluation, No. 55). San Francisco: Jossey-Bass.
    Chelimsky, E. (1997). The coming transformations in evaluation. In E.Chelimsky & W.R.Shadish (Eds.), Evaluation for the 21st century. Thousand Oaks, CA: Sage.
    Chen, H.T. (1989). Issues in the theory-driven perspective. Evaluation and Program Planning, 12, 299–306. http://dx.doi.org/10.1016/0149-7189%2889%2990046-3
    Chen, H.T., & Rossi, P.H. (1987). The theory-driven approach to validity. Evaluation Review, 7, 95–103.
    Clifford, D.L., & Sherman, P. (1983). Internal evaluation: Integrating program evaluation and management. In A.J.Love (Ed.), Developing effective internal evaluation (New Directions for Program Evaluation, No. 20). San Francisco: Jossey-Bass.
    Cook, T.D., & Campbell, D.T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston: Houghton Mifflin.
    Cordray, D.S. (1989). Optimizing validity in program research: An elaboration of Chen and Rossi's theory-driven approach. Evaluation and Program Planning, 12, 379–385. http://dx.doi.org/10.1016/0149-7189%2889%2990055-4
    Creswell, J.W. (1994). Research design: Qualitative and quantitative approaches. Thousand Oaks, CA: Sage.
    Cummings, T.G., Mohrman, S.A., Mohrman, A.M., Jr., & Ledford, G.E., Jr. (1985). Organization design for the future: A collaborative research approach. In E.E.LawlerIII, A.M.Mohrman, Jr., S.A.Mohrman, G.E.Ledford, Jr., & T.G.Cummings (Eds.), Doing research that is useful for theory and practice (pp. 275–323). San Francisco: Jossey-Bass.
    Datta, L. (1997). A pragmatic basis for mixed-method designs. In J.C.Greene & V.J.Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (New Directions for Evaluation, No. 74). San Francisco: Jossey-Bass.
    Davis, H.R., & Salasin, S.E. (1975). The utilization of evaluation. In E.L.Struening & M.Guttentag (Eds.), Handbook of evaluation research (Vol. 1, pp. 621–666). Beverly Hills, CA: Sage.
    Dean, D.L. (1994). How to use focus groups. In J.S.Wholey, H.P.Hatry, & K.E.Newcomer (Eds.), Handbook of practical program evaluation. San Francisco: Jossey-Bass.
    Derlien, H. (1990). Genesis and structure of evaluation efforts in comparative perspective. In R.C.Rist (Ed.), Program evaluation and the management of government: Patterns & prospects across eight nations. New Brunswick, NJ: Transaction Publishing.
    Downs, A. (1977). Some thoughts on giving people economic advice. In F.G.Caro (Ed.), Readings in evaluation research. New York: Russell Sage.
    Drucker, P.F. (1986). The frontiers of management: When tomorrow's decisions are being shaped today. New York: Truman Talley Books/Dutton.
    Drucker, P.F. (1989). The new realities. New York: Harper & Row.
    Duffy, B.P. (1993). Focus groups: An important research technique for internal evaluation units. Evaluation Practice, 14, 133–139. http://dx.doi.org/10.1016/0886-1633%2893%2990003-8
    Evered, R., & Louis, M.R. (1981). Alternative perspectives in the organizational sciences: “Inquiry from the inside” and “inquiry from the outside.” Academy of Management Review, 6, 385–395.
    Farley, J. (1991). Sociotechnical theory: An alternative framework for evaluation. In C.L.Larson & H.Preskill (Eds.), Organizations in transition: Opportunities and challenges for evaluation (New Directions for Program Evaluation, No. 49, pp. 51–62). San Francisco: Jossey-Bass.
    Fetterman, D.M. (1997). Empowerment evaluation and accreditation in higher education. In E.Chelimsky & W.R.Shadish (Eds.), Evaluation for the 21st century: A handbook. Thousand Oaks, CA: Sage.
    Fishman, D.B. (1991). An introduction to the experimental versus the pragmatic paradigm in evaluation. Evaluation and Program Planning, 14, 353–363. http://dx.doi.org/10.1016/0149-7189%2891%2990018-C
    Fleischer, M. (1983). The evaluator as program consultant. Evaluation and Program Planning, 6, 69–76. http://dx.doi.org/10.1016/0149-7189%2883%2990046-0
    Forss, K., Cracknell, B., & Samset, K. (1994). Can evaluation help an organization to learn? Evaluation Review, 18, 574–591. http://dx.doi.org/10.1177/0193841X9401800503
    General Accounting Office. (1987). Federal evaluation: Fewer units, reduced resources, different studies from 1980 (PEMD-87-9). Washington, DC: Author.
    General Accounting Office. (1988). Program evaluation issues (GAO Transition Series). Washington, DC: Author.
    General Accounting Office. (1991). Designing evaluations (GAO/PEMD-10.1.4). Washington, DC: Author.
    General Accounting Office. (1997). The Government Performance and Results Act (GAO/GGD-97-109). Washington, DC: Author.
    General Accounting Office. (1998). Program evaluation: Agencies challenged by demand for information on program results (GAO/GGD-98-53). Washington, DC: Author.
    Gladis, S.D. (1988). FBI Evaluation Workshop, Wintergreen, VA.
    Gladis, S.D. (1989). Process writing: A systematic writing strategy. Amherst, MA: Human Resource Development.
    Greene, J.C., & Caracelli, V.J. (1997). Editors’ notes. In J.C.Greene & V.J.Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (New Directions for Evaluation, No. 74). San Francisco: Jossey-Bass.
    Gunn, W.J. (1987). Client concerns and strategies in evaluation studies. In J.Nowakowski (Ed.), The client perspective on evaluation (New Directions for Program Evaluation, No. 36). San Francisco: Jossey-Bass.
    Gurel, L. (1975). The human side of evaluating human services programs: Problems and prospects. In M.Guttentag & E.Struening (Eds.), Handbook of evaluation research (Vol. 2., pp. 11–28). Beverly Hills, CA: Sage.
    Hansson, F. (1997). Critical comments on evaluation research in Denmark. In E.Chelimsky & W.R.Shadish (Eds.), Evaluation for the 21st century. Thousand Oaks, CA: Sage.
    Hendricks, M. (1994). Making a splash: Reporting evaluation results effectively. In J.S.Wholey, H.P.Hatry, & K.E.Newcomer (Eds.), Handbook of practical program evaluation (pp. 549–575). San Francisco: Jossey-Bass.
    Hendricks, M., & Handley, E. (1990). Improving the recommendations from evaluation studies. Evaluation and Program Planning, 13, 109–117. http://dx.doi.org/10.1016/0149-7189%2890%2990038-X
    Henry, G.T. (1997). Introduction. In G.T.Henry (Ed.), Creating effective graphs: Solutions for a variety of evaluation data (New Directions for Program Evaluation, No. 73). San Francisco: Jossey-Bass.
    Hersey, P., & Blanchard, K. (1982). Management of organizational behavior: Utilizing human resources. Englewood Cliffs, NJ: Prentice Hall.
    Hirschman, A.O. (1970). Exit, voice and loyalty. Cambridge, MA: Harvard University Press.
    Honea, G.E. (1992). Ethics and public sector evaluators: Nine case studies. Unpublished doctoral dissertation, University of Virginia, Charlottesville.
    Horst, P., Nay, J.N., Scanlon, J.W., & Wholey, J.S. (1977). Program manager and the federal evaluator. In F.G.Caro (Ed.), Readings in evaluation research (2nd ed.). New York: Russell Sage.
    House, E.R. (1986). Internal evaluation. Evaluation Practice, 9, 43–46. http://dx.doi.org/10.1016/S0886-1633%2888%2980045-X
    House, E.R. (1990). An ethics of qualitative field studies. In E.G.Guba (Ed.), The paradigm dialog. Newbury Park, CA: Sage.
    House, E.R. (1997). Evaluation in the government marketplace. Evaluation Practice, 18, 37–48. http://dx.doi.org/10.1016/S0886-1633%2897%2990006-4
    Huse, E.F., & Cummings, T.G. (1985). Organization development and change. St. Paul, MN: West.
    Indrebo, A.M. (1997, November). The self-evaluating organization: A study of collaborative evaluation and organizational learning in schools. Paper presented at the annual meeting of the American Evaluation Association, San Diego, CA.
    Jenlink, P.M. (1994). Using evaluation to understand the learning architecture of an organization. Evaluation and Program Planning, 17, 315–325. http://dx.doi.org/10.1016/0149-7189%2894%2990011-6
    Joint Committee on Standards for Educational Evaluation. (1994). Program evaluation standards (2nd ed.). Thousand Oaks, CA: Sage.
    Kaplan, A. (1964). The conduct of inquiry. New York: Harper & Row.
    Katz, D., & Kahn, R. (1978). The social psychology of organizations (2nd ed.). New York: John Wiley.
    Kennedy, M.M. (1983). The role of the in-house evaluator. Evaluation Review, 7, 519–541. http://dx.doi.org/10.1177/0193841X8300700406
    Kiefer, C.F., & Stroh, P. (1984). A new paradigm for developing organizations. In J.D.Adams (Ed.), Transforming work. Alexandria, VA: Miles River.
    Kingsbury, N., & Hedrick, T.E. (1994). Evaluator training in a government setting. In J.W.Altschuld & M.Engle (Eds.), The preparation of professional evaluators: Issues, perspectives, and programs (New Directions for Program Evaluation, No. 62, pp. 61–70). San Francisco: Jossey-Bass.
    Krathwohl, D.R. (1985). Social and behavioral science research: A new framework for conceptualizing, implementing, and evaluating research studies. San Francisco: Jossey-Bass.
    Ledford, G.E., Mohrman, S.A., Mohrman, A.M., & Lawler, E.E. (1989). The phenomenon of large-scale organizational change. In A.M.Mohrman, S.A.Mohrman, G.E.Ledford, Jr., T.G.Cummings, & E.E.LawlerIII (Eds.), Large-scale organizational change. San Francisco: Jossey-Bass.
    Leeuw, F.L., & Sonnichsen, R.C. (1994). Evaluation and organizational learning: International perspectives. In F.L.Leeuw, R.C.Rist, & R.C.Sonnichsen (Eds.), Can governments learn? Comparative perspectives on evaluation & organizational learning. New Brunswick, NJ: Transaction Publishing.
    Leviton, L.C., & Hughes, E.F.X. (1981). Research on the utilization of evaluations: A review and synthesis. Evaluation Review, 5, 525–548. http://dx.doi.org/10.1177/0193841X8100500405
    Love, A.J. (Ed.). (1983a). Developing effective internal evaluation (New Directions for Program Evaluation, No. 20). San Francisco: Jossey-Bass.
    Love, A.J. (1983b). Editor's notes. In A.J.Love (Ed.), Developing effective internal evaluation (New Directions for Program Evaluation, No. 20). San Francisco: Jossey-Bass.
    Love, A.J. (1983c). The organizational context and the development of internal evaluation. In A.J.Love (Ed.), Developing effective internal evaluation (New Directions for Program Evaluation, No. 20). San Francisco: Jossey-Bass.
    Love, A.J. (1991). Internal evaluation: Building organizations from within. Newbury Park, CA: Sage.
    Marsella, A.J., & Yang, A.L. (1983). Personality research: Anxiety, aggression, and locus of control. In R.J.Corsini & A.J.Marsella (Eds.), Personality theories, research, & assessment. Itasca, IL: F. E. Peacock.
    Mathison, S. (1991). What do we know about internal evaluation? Evaluation and Program Planning, 14, 159–165. http://dx.doi.org/10.1016/0149-7189%2891%2990051-H
    Mayne, J. (1992a). Establishing internal evaluation in an organization. In J.Hudson, J.Mayne, & R.Thomlison (Eds.), Action-oriented evaluation in organizations: Canadian practices. Toronto, Ontario: Wall & Emerson.
    Mayne, J. (1992b). Institutionalizing program evaluation. In J.Hudson, J.Mayne, & R.Thomlison (Eds.), Action-oriented evaluation in organizations: Canadian practices. Toronto, Ontario: Wall & Emerson.
    Mayne, J. (1994). Utilizing evaluation in organizations: The balancing act. In F.L.Leeuw, R.C.Rist, & R.C.Sonnichsen (Eds.), Can governments learn? Comparative perspectives on evaluation & organizational learning. New Brunswick, NJ: Transaction Publishing.
    McQueen, C. (1992). Program evaluation in the Canadian federal government. In J.Hudson, J.Mayne, & R.Thomlison (Eds.), Action-oriented evaluation in organizations: Canadian practices (pp. 28–47). Toronto, Ontario: Wall & Emerson.
    McWeeney, T. (1995, January 30–31). Central Michigan University performance measures law enforcement seminar, Washington, DC.
    Mertens, D.M. (1994). Training evaluators: Unique skills and knowledge. In J.W.Altschuld & M.Engle (Eds.), The preparation of professional evaluators: Issues, perspectives, and programs (New Directions for Program Evaluation, No. 62, pp. 17–27). San Francisco: Jossey-Bass.
    Merton, R.K. (1972). Insiders and outsiders: A chapter in the sociology of knowledge. American Journal of Sociology, 78 (1), 9–47. http://dx.doi.org/10.1086/225294
    Minnett, A.M. (1997, November). The internal evaluator's role in self-reflective organizations: One nonprofit agency's model. Paper presented at the annual meeting of the American Evaluation Association, San Diego, CA.
    Mohr, L.B. (1988). Impact analysis for program evaluation. Chicago: Dorsey.
    Morgan, G. (1983). Beyond method. Beverly Hills, CA: Sage.
    Morgan, G. (1986). Images of organization. Beverly Hills, CA: Sage.
    Morris, M. (1994). The role of single evaluation courses in evaluation training. In J.W.Altschuld & M.Engle (Eds.), The preparation of professional evaluators: Issues, perspectives, and programs (New Directions for Program Evaluation, No. 62, pp. 51–59). San Francisco: Jossey-Bass.
    Morris, M., & Cohn, R. (1993). Program evaluation and ethical challenges: A national survey. Evaluation Review, 17, 621–642. http://dx.doi.org/10.1177/0193841X9301700603
    Nadler, D.A., & Tushman, M.L. (1992). Designing organizations that have good fit: A framework for understanding new architectures. In D.A.Nadler, M.S.Gerstein, R.B.Shaw, & Associates, Organizational architecture. San Francisco: Jossey-Bass.
    Naisbitt, J. (1982). Megatrends. New York: Warner.
    Newman, D.L., & Brown, R.D. (1996). Applied ethics for program evaluation. Thousand Oaks, CA: Sage.
    Nicoll, D. (1984). Consulting to organizational transformations. In J.D.Adams (Ed.), Transforming work. Alexandria, VA: Miles River.
    North, D.C. (1990). Institutions, institutional change and economic performance. Cambridge, UK: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511808678
    Oman, R.C. (1989). Process dimensions in program evaluation. In G.Barkdoll & J.Bell (Eds.), Evaluation and the federal decision maker (New Directions for Program Evaluation, No. 41). San Francisco: Jossey-Bass.
    Patton, M.Q. (1986). Utilization-focused evaluation (2nd ed.). Beverly Hills, CA: Sage.
    Patton, M.Q. (1988). The evaluator's responsibility for utilization. Evaluation Practice, 9 (2), 5–24. http://dx.doi.org/10.1016/S0886-1633%2888%2980059-X
    Patton, M.Q. (1989a). A context and boundaries for a theory-driven approach to validity. Evaluation and Program Planning, 12375–377. http://dx.doi.org/10.1016/0149-7189%2889%2990054-2
    Patton, M.Q. (1989b). FBI evaluation training workshop, Quantico, VA.
    Patton, M.Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.
    Patton, M.Q. (1994). Development evaluation. Evaluation Practice, 15 (3), 311–319. http://dx.doi.org/10.1016/0886-1633%2894%2990026-4
    Patton, M.Q. (1997). Utilization-focused evaluation (3rd ed.). Thousand Oaks, CA: Sage.
    Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.
    Peters, T. (1987). Thriving on chaos: Handbook for a management revolution. New York: Knopf.
    Peters, T. (1992). Liberation management: Necessary disorganization for the nanosecond nineties. New York: Knopf.
    Pettigrew, M. (1996). Evaluation in the UK. European Evaluation Newsletter, 3.
    Pfeffer, J. (1978). Organizational design. Arlington Heights, IL: AHM.
    Pfeffer, J. (1981). Power in organizations. Boston: Pitman.
    Phillips, D.C. (1987). Philosophy, science, and social inquiry. Oxford: Pergamon.
    Preskill, H. (1991). The cultural lens: Bringing utilization into focus. In C.L.Larson & H.Preskill (Eds.), Organizations in transition: Opportunities and challenges for evaluation (New Directions for Program Evaluation, No. 49, pp. 5–16). San Francisco: Jossey Bass.
    Preskill, H. (1994). Evaluation's role in enhancing organizational learning: A model for practice. Evaluation and Program Planning, 17 (3), 291–297. http://dx.doi.org/10.1016/0149-7189%2894%2990008-6
    Preskill, H., & Caracelli, V. (1997). Current and developing conceptions of use: Evaluation use TIG survey results. Evaluation Practice, 18 (3), 209–225. http://dx.doi.org/10.1016/S0886-1633%2897%2990028-3
    Radin, B.A. (1987). The organization and its environment: What difference do they make? In J.S.Wholey (Ed.), Organizational excellence: Stimulating quality and communicating value. Lexington, MA: Lexington Books.
    Rainey, H.G. (1991). Understanding and managing public organizations. San Francisco: Jossey-Bass.
    Raven, B.H., & Kruglanski, A.W. (1970). Conflict and power. In P.G.Swingle (Ed.), The structure of conflict. New York: Academic Press.
    Read, W. (1962). Upward communication in industrial hierarchies. Human Relations, 15, 3–16. http://dx.doi.org/10.1177/001872676201500101
    Reichardt, C.S. (1994). Summative evaluation, formative evaluation, and tactical research. Evaluation Practice, 15, 275–281. http://dx.doi.org/10.1016/0886-1633%2894%2990022-1
    Reichardt, C.S., & Cook, T.D. (1979). Beyond qualitative versus quantitative methods. In T.D.Cook & C.S.Reichardt (Eds.), Qualitative and quantitative methods in evaluation research (pp. 7–32). Beverly Hills, CA: Sage.
    Richardson, E.L. (1992). The value of evaluation. In C.G.Wye & R.C.Sonnichsen (Eds.), Evaluation in the federal government: Changes, trends, and opportunities (New Directions for Program Evaluation, No. 55, pp. 15–20). San Francisco: Jossey-Bass.
    Rist, R.C. (1990). On the application of program evaluation designs: Sorting out their use and abuse. In R.C.Rist (Ed.), Policy and program evaluation: Perspectives on design and utilization. Brussels: Institute of Administrative Sciences.
    Rist, R.C. (1994). The preconditions for learning: Lessons from the public sector. In F.L.Leeuw, R.C.Rist, & R.C.Sonnichsen (Eds.), Can governments learn? Comparative perspectives on evaluation & organizational learning. New Brunswick, NJ: Transaction Publishing.
    Robertson, P.J., & Seneviratne, S.J. (1995). Outcomes of planned organizational change in the public sector: A meta-analytic comparison to the private sector. Public Administration Review, 55, 547–558. http://dx.doi.org/10.2307/3110346
    Rossi, P.H., & Freeman, H.E. (1989). Evaluation: A systematic approach (4th ed.). Newbury Park, CA: Sage.
    Rutman, L. (1980). Planning useful evaluations: Evaluability assessment. Beverly Hills, CA: Sage.
    Rymph, D.B. (1989). Evaluation in the Department of Housing and Urban Development, 1980–1988. Evaluation Practice, 10, 30–39. http://dx.doi.org/10.1016/S0886-1633%2889%2980051-0
    Rymph, D.B. (1992). Evaluation in the Department of Housing and Urban Development. In C.G.Wye & R.C.Sonnichsen (Eds.), Evaluation in the federal government: Changes, trends, and opportunities (New Directions for Program Evaluation, No. 55). San Francisco: Jossey-Bass.
    Sanders, J.R. (1995). Standards and principles. In W.R.Shadish, D.L.Newman, M.A.Scheirer, & C.Wye (Eds.), Guiding principles for evaluators (New Directions for Program Evaluation, No. 66). San Francisco: Jossey-Bass.
    Sanera, M. (1984). Implementing the mandate. In S.M.Butler, M.Sanera, & W.B.Weinrad (Eds.), Mandate for leadership II: Continuing the conservative revolution. Washington, DC: Heritage Foundation.
    Schon, D.A. (1971). Beyond the stable state. New York: Norton.
    Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.
    Scriven, M. (1993). The nature of evaluation. In M.Scriven (Ed.), Hard-won lessons in program evaluation (New Directions for Program Evaluation, No. 58). San Francisco: Jossey-Bass.
    Scriven, M. (1995). The logic of evaluation and evaluation practice. In D.M.Founier (Ed.), Reasoning in evaluation: Inferential links and leaps (New Directions for Program Evaluation, No. 68). San Francisco: Jossey-Bass.
    Scriven, M. (1996). The theory behind practical evaluation. Evaluation, 2 (4), 393–404. http://dx.doi.org/10.1177/135638909600200403
    Scriven, M. (1997). Truth and objectivity in evaluation. In E.Chelimsky & W.Shadish (Eds.), Evaluation for the 21st century. Thousand Oaks, CA: Sage.
    Segsworth, R.V. (1994). Downsizing and program evaluation: An assessment of the experience in the government of Canada. In R.Bernier & J.Gow (Eds.), Un etat reduit? [A down-sized state?] (pp. 249–259). Sainte-Foy, Quebec: University of Quebec Press.
    Senge, P.M. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday.
    Shadish, W.R., Cook, T.D., & Leviton, L.C.(1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage.
    Shadish, W.R., Newman, D.L., Scheirer, M.A., & Wye, C. (1995). Developing the guiding principles. In W.R.Shadish, D.L.Newman, M.A.Scheirer, & C.Wye (Eds.), Guiding principles for evaluators (New Directions for Program Evaluation, No. 66). San Francisco: Jossey-Bass.
    Simon, H.A. (1976). Administrative behavior (3rd ed.). New York: Free Press.
    Sommerlad, E. (1995). The UK “voluntary” sector and the emergent role of the practitioner evaluator in NGOs [Review of Evaluating ourselves]. Evaluation, 1, 107–110.
    Sonnichsen, R.C. (1988). Advocacy evaluation: A model for internal evaluation offices. Evaluation and Program Planning, 11, 141–148. http://dx.doi.org/10.1016/0149-7189%2888%2990005-5
    Sonnichsen, R.C. (1989a). Advocacy evaluation: A strategy for organizational improvement. Knowledge: Creation, Diffusion, Utilization, 10 (4).
    Sonnichsen, R.C. (1989b). An open letter to Ernest House. Evaluation Practice, 10 (3), 59–63. http://dx.doi.org/10.1016/S0886-1633%2889%2980014-5
    Sonnichsen, R.C. (1989c). Program managers: Victims or victors in the evaluation process. In G.L.Barkdoll & J.B.Bell (Eds.), Evaluation and the federal decision maker (New Directions for Program Evaluation, No. 41). San Francisco: Jossey-Bass.
    Sonnichsen, R.C. (1990). Organizational learning and the environment of evaluation. In C.Bellavita (Ed.), How public organizations work: Learning from experience. New York: Praeger.
    Sonnichsen, R.C. (1991). Characteristics of high impact internal evaluation offices. Unpublished doctoral dissertation, University of Southern California, Department of Public Administration.
    Stame, N. (1998, October). Evaluation in Italy: Evaluation as a tool of government (Newsletter No. 2/98). Stockholm, Sweden: European Evaluation Society.
    Stiglitz, J.E. (1994). Whither socialism? Cambridge, MA: MIT Press.
    Tanji, J.M. (1993). Wishing for a one-armed expert: Recommendations as a philosophical attitude. Evaluation and Program Planning, 16, 149–152. http://dx.doi.org/10.1016/0149-7189%2893%2990026-5
    Tarnas, R. (1991). The passion of the Western mind: Understanding the ideas that have shaped our world view. New York: Ballantine.
    Tate, D.L., & Cummings, O.W. (1991). Promoting evaluations with management. In C.L.Larson & H.Preskill (Eds.), Organizations in transition: Opportunities and challenges for evaluation (New Directions for Program Evaluation, No. 49). San Francisco: Jossey-Bass.
    Toffler, A. (1981). The third wave. New York: Bantam.
    Toffler, A. (1990). Powershift. New York: Bantam.
    Torres, R.T. (1991). Improving the quality of internal evaluation: The evaluator as consultant-mediator. Evaluation and Program Planning, 14, 189–198. http://dx.doi.org/10.1016/0149-7189%2891%2990055-L
    Torres, R.T., Preskill, A.S., & Piontek, M.E. (1996). Evaluation strategies for communicating and reporting: Enhancing learning in organizations. Thousand Oaks, CA: Sage.
    Treadwell, W.A. (1995). Fuzzy set theory movement in the social sciences. Public Administration Review, 55, 91–98. http://dx.doi.org/10.2307/976831
    Tushman, M.L., Newman, W.H., & Nadler, D.A. (1988). Executive leadership and organizational evolution: Managing incremental and discontinuous change. In R.H.Kilmann, T.J.Covin, & Associates (Eds.), Corporate transformation: Revitalizing organizations for a competitive world. San Francisco: Jossey-Bass.
    U.S. Department of Health and Human Services, Office of Inspector General. (1989). Financial arrangements between physicians and health care businesses. Report to Congress. Washington, DC: Government Printing Office.
    U.S. Department of Housing and Urban Development, Office of Community Planning and Development. (1989). An analysis of the income cities earn from UDAG projects. Office of Program Analysis and Evaluation report. Washington, DC: U.S. Department of Housing and Urban Development.
    Vaill, P.B. (1996). Learning as a way of being: Strategies for survival in a world of permanent white water. San Francisco: Jossey-Bass.
    van de Vall, M., & Bolas, C.A. (1981). External vs. internal social policy researchers. Knowledge: Creation, Diffusion, Utilization, 2, 461–481.
    Wargo, M.J. (1995). The impact of federal government reinvention on federal evaluation activity. Evaluation Practice, 16 (3), 227–237. http://dx.doi.org/10.1016/0886-1633%2895%2990036-5
    Weiss, C.H. (1975). Evaluation research in the political context. In E.L.Struening & M.Guttentag (Eds.), Handbook of evaluation research (Vol. 1). Beverly Hills, CA: Sage.
    Weiss, C.H. (1977). Research for policy's sake: The enlightenment function of social research. Policy Analysis, 3 (4), 530–545.
    Weiss, C.H. (1988). If program decisions hinged only on information: A response to Patton. Evaluation Practice, 9 (3), 15–28. http://dx.doi.org/10.1016/S0886-1633%2888%2980042-4
    Wheatley, M.J. (1992). Leadership and the new science: Learning about organization from an orderly universe. San Francisco: Berrett-Koehler.
    Wholey, J.S. (1979). Evaluation: Promise and performance. Washington, DC: Urban Institute.
    Wholey, J.S. (1983). Evaluation and effective public management. Boston: Little, Brown.
    Wholey, J.S. (1986, October). Options, not recommendations. Paper presented at the meeting of the American Evaluation Association, Kansas City, KS.
    Wholey, J.S. (1989). Introduction: How evaluation can improve agency and program performance. In J.S.Wholey, K.E.Newcomer, & Associates (Eds.), Improving government performance: Evaluation strategies for strengthening public agencies and programs. San Francisco: Jossey-Bass.
    Wholey, J.S. (1994). Assessing the feasibility and likely usefulness of evaluation. In J.S.Wholey, H.P.Hatry, & K.E.Newcomer (Eds.), Handbook of practical program evaluation. San Francisco: Jossey-Bass.
    Wholey, J.S. (1997). Trends in performance measurement: Challenges for evaluators. In E.Chelimsky & W.R.Shadish (Eds.), Evaluation for the 21st century. Thousand Oaks, CA: Sage.
    Wholey, J.S., & Newcomer, K.E. (1997). Clarifying goals, reporting results. In K.E.Newcomer (Ed.), Using performance measurement to improve public and nonprofit programs (New Directions for Evaluation, No. 75, pp. 91–98). San Francisco: Jossey-Bass.
    Wildavsky, A. (1979). Speaking truth to power: The art and craft of policy analysis. Boston: Little, Brown.
    Winberg, A. (1991). Maximizing the contribution of internal evaluation units. Evaluation and Program Planning, 14, 167–172. http://dx.doi.org/10.1016/0149-7189%2891%2990052-I
    Windle, C., & Neigher, W. (1978). Ethical problems in program evaluation: Advice for trapped evaluators. Evaluation and Program Planning, 1, 97–108. http://dx.doi.org/10.1016/0149-7189%2878%2990024-1
    Worthen, B.R. (1995). Some observations about the institutionalization of evaluation. Evaluation Practice, 16, 29–36. http://dx.doi.org/10.1016/0886-1633%2895%2990004-7
    Wrong, D.H. (1979). Power: Its forms, bases, and uses. Key Concepts in the Social Sciences. New York: Harper & Row.
    Wye, C. (1989). Stimulating change in agency management. In J.S.Wholey, K.E.Newcomer, & Associates (Eds.), Improving government performance: Evaluation strategies for strengthening public agencies and programs (pp. 179–194). San Francisco: Jossey-Bass.
    Yin, R. (1984). Case study research: Design and methods. Beverly Hills, CA: Sage.
    Zapico-Goni, E., & Mayne, J. (1997). Performance monitoring: Implications for the future. In J.Mayne & E.Zapico-Goni (Eds.), Monitoring performance in the public sector. New Brunswick, NJ: Transaction Publishing.

    About the Author

    Richard C. Sonnichsen retired from the Federal Bureau of Investigation in 1994 after 30 years of service as a special agent investigator and senior executive. For the last 12 years, he was the deputy assistant director in charge of the Office of Planning, Evaluation, and Audits, where he managed the FBI's strategic planning efforts, financial audits, and investigative and administrative program evaluations. He currently works as an evaluation and management consultant. He is a member of the American Evaluation Association, the American Society for Public Administration, and the International Working Group on Evaluation (INTEVAL) and has taught evaluation and research at the University of Southern California Washington Public Affairs Center in Washington, D.C. as an adjunct faculty member. In 1996, he received the Alva and Gunnar Myrdal Award for Government Service from the American Evaluation Association “in recognition of his career contributions toward making internal evaluation both valued and useful.” In addition to High Impact Internal Evaluation, he has written numerous articles on internal evaluation and chapters in eight books, and he has coedited two books, Can Governments Learn? Comparative Perspectives on Evaluation & Organizational Learning and Evaluation in the Federal Government: Changes, Trends, and Opportunities. He has served on the editorial boards of the New Directions for Program Evaluation series, Evaluation Practice, and Evaluation and Program Planning. Dr. Sonnichsen has spoken and presented papers at evaluation conferences in the United States, Canada, and Europe. He received his undergraduate degree in forestry from the University of Idaho and his master's and doctoral degrees in public administration from the University of Southern California. He resides with his wife, Sally, in Sandpoint, Idaho.

