
Politics Of Evaluation

Like any organized social practice, evaluation is unavoidably implicated in matters of politics. The ways in which politics and evaluation are interrelated are many. Consider, for example, the following sampling of topics addressed in the literature: evaluation as a political ritual, political participation in evaluation, the politics of identifying stakeholders, evaluation as a political activity in a political context, evaluation in the service of political rationality, the political morality of evaluation practice, the politics of applying methods in evaluation, the political accountability of evaluation. A not uncommon view is that even though evaluation results are surely implicated in the arena of politics, political concerns inevitably represent an intrusion into the conduct of evaluation. This state of affairs is inevitable because (a) programs are created and maintained by political forces; (b) higher echelons of government, which make decisions about programs, are embedded in politics; and (c) the very act of evaluation has political connotations. The milieu and discourse of politics—conceived in terms of norms, values, ideology, power, influence, authority, and so forth—are often contrasted with the world of science, which is conceived in terms of facts, objectivity, and empirically warranted descriptions and explanations. The world of politics and values is thought to lie outside of the scientific practice of evaluation, so the danger is that the former will impede, constrain, unduly influence, or otherwise obstruct evaluation. Taking this into consideration, evaluators ought to be aware of political rationality, recognize that the findings of their evaluations might well be used for political purposes, and take steps to minimize the contamination of scientific rationality by political influences. In a nutshell, this is the doctrine of value-free science as applied to evaluation.

A less common view is that politics is not simply bound to the uses of evaluation but always implicated in our understandings of what evaluation is as a practice and what constitutes evaluation knowledge. In other words, evaluation is as much a political as a scientific practice. In this way of thinking, the politics of knowing can be explored at both the micro and the macro levels of evaluation practice. The micro level is the level of everyday interaction involving, for example, negotiations between evaluator and sponsor, evaluator and client, and evaluator and stakeholders. The macro level refers to the behavior of evaluation as a whole—as a social practice or institution.

At the micro level, one might examine the political motivations of evaluators. For example, it is generally understood (although not necessarily widely accepted) that many evaluators prefer case study, participatory, empowerment, utilization-focused, and collaborative forms of evaluation because they want evaluation to be more directly helpful to those being studied. To promote stakeholder engagement, self-determination, and ownership of an evaluation process is to endorse a particular set of political values. Conversely, to practice evaluation in such a way that the evaluator intentionally focuses solely on producing an objective (i.e., interest-neutral and value-neutral) assessment of program performance and avoids any substantial involvement with participants in an evaluation (beyond using them as data sources or informants) is to promote a different political orientation to evaluation. At the micro level, the politics of negotiating evaluation contracts may also be considered, including access to and control of data, as well as the politics involved in the myriad types of interactions between evaluator, sponsor, client, and stakeholders. In a value-free ideology for evaluation, the point of such political negotiations is actually to minimize the intrusion of politics! The Joint Committee's Program Evaluation Standards, for example, contend that for an evaluation to be politically viable, it must be planned and conducted in such a way that the cooperation of various interest groups is obtained and that any efforts by these groups to curtail or otherwise influence the conduct or conclusions of the evaluation are averted or counteracted. What may be overlooked here, however, is that this way of thinking itself foregrounds a broad political orientation characterized by values of cooperation, accommodation, and "getting along."
One could equally well imagine a different political orientation, one that grounds the political viability of evaluation in values of confrontation, ideology critique, and emancipation. An additional consideration of the micropolitics of evaluation focuses on the rhetorical construction and presentation of evaluation arguments. In Ernest House's famous argument, evaluation is fundamentally a matter of persuasion and argumentation, not demonstration or proof. The task of persuading an audience that such and such is the case inevitably takes up matters of rhetoric and the crafting of appeals to reason and evidence suited to various audiences. Thus the politics and poetics of evaluation reporting are intertwined. A final illustration of micropolitical considerations in evaluation has to do with the politics of methods choice and use. Two examples illustrate different considerations here. Consider first the politics involved in the construction of a procedure to generate data. For example, Michael Oliver, a long-time activist and scholar in disability research, argues that social researchers working for the British government promoted the view that disability is a problem "in the person" by asking survey questions such as, "Are your difficulties in understanding people mainly due to a hearing problem?" "Have you attended a special school because of a long-term health problem or disability?" Oliver maintains that if the researchers had taken the view that the problem of disability is "in society," they might have asked instead, "Are your difficulties in understanding people mainly due to their inabilities to communicate with you?" "Have you attended a special school because of your education authority's policy of sending people with your health problem or disability to such places?" Second, consider that MacDonald saw the use of case study evaluation (i.e., the idea of studying a social program as itself a case) as a means of bringing agencies and actors at all levels of the power structure within the boundaries of a case. In this way, case study methodology served the goals of democratic evaluation, for it opened programs to greater public scrutiny and accountability and served as a counterforce to bureaucratic evaluations that served to maintain and extend managerial power.

...
