judgment from evaluators. Considerable personal bias is to be expected when the evaluator of a program becomes its advocate: it is hard for evaluators to state that a program is not working and should be terminated if they believe in the program and have invested heavily in it. However, by reviewing other theorists' positions on the issue of the evaluator's role, we can see that each role has its unique strengths and weaknesses, and that no single role fits all evaluations. Instead of using another metaphor to describe my position, I will define the evaluator's role in terms of what an evaluator does during an evaluation, as shown in the following table. (See Table 2)

Insert Table 2 Here

Nevertheless, I recognize that my resolution of this fundamental issue is not perfect and is vulnerable to criticism in several respects. First, the role I propose for evaluators might not work in certain contexts; it is also quite possible that evaluators who take different roles can still achieve great success in conducting evaluations.
Take Stake, Weiss, Wholey, and Cronbach, for example: their differences regarding the proper role for an evaluator do not prevent any of them from presenting strong evaluation cases in support of their positions. In other words, I am in effect proposing a role for evaluators and cautioning them to adopt it, while at the same time acknowledging that other roles might work even better in their contexts. Second, evaluators' different cultural, racial, and ethnic backgrounds could also affect their judgments about issues such as clients' expectations, the dynamics between program administrators and evaluators, and the monetary relationships among evaluators, program administrators, the federal government, and stakeholder groups, since those issues may vary significantly across contexts. Because my resolution is based on my judgment of such issues, if that judgment does not accurately reflect the reality elsewhere, my resolution might not fit well in that context. Last but not least, one can also challenge the logic behind my approach to defining the evaluator's role.
Maybe it is wrong to define the evaluator's role in terms of what evaluators should do during the evaluation; maybe an evaluator's role should instead be associated with the purpose of the evaluation, with that role then defining what the evaluator should do.

References

Campbell, D. T. (1984). Can we be scientific in applied social science? Evaluation studies review annual (Vol. 9). Beverly Hills, CA: Sage.

Rossi, P. H., & Freeman, H. E. (1985). Evaluation: A systematic approach (3rd ed.). Beverly Hills, CA: Sage.

Scriven, M. (1969). An introduction to meta-evaluation. Education Product Report, 2, 36-38.

Scriven, M. (1986). New frontiers of evaluation. Evaluation Practice, 7, 7-44.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Shadish, W. R., Jr., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation. Newbury Park, CA: Sage.

Smith, N. L., & Brandon, P. R. (Eds.). (2008). Fundamental issues in evaluation. New York, NY: Guilford.

Stake, R. E. (1980). Program evaluation, particularly responsive evaluation. In Rethinking educational research (pp. 72-87). London: Hodder & Stoughton.

Stake, R. E., & Trumbull, D. J. (1982). Naturalistic generalizations. Review Journal of Philosophy & Social Science, 7, 1-12.

Weiss, C. H. (1978). Improving the linkage between social research and public policy. In L. E. Lynn (Ed.), Knowledge and policy: The uncertain connection (pp. 23-81). Washington, DC: National Academy of Sciences.

Weiss, C. H. (1988). If program decisions hinged only on information: A response to Patton. Evaluation Practice, 9(3), 15-28.

Notes

Note 1. Readers gain new perspectives and ideas from reading evaluation reports, which results in accumulated knowledge and evidence that they can use in many ways, including defining a problem, suggesting a solution, and making policy. In contrast to the instrumental use of evaluation results, enlightenment emphasizes the long-term effect on policy-making.
International Education Studies Vol. 3, No. 2; May 2010
Table 1. A summary of theorists' positions on the issue of the evaluator's role

Program selection
- Scriven: "everything" (p. 84)
- Campbell: preference for pilot programs (p. 136)
- Stake: preference for local programs
- Weiss: programs whose evaluation results are likely to be used
- Rossi: divides programs into three kinds: innovative, established, and fine-tuning

Criteria selection
- Scriven: prescribes value through needs assessment
- Campbell: describes values from the political process (p. 160)
- Stake: describes values from local stakeholders (p. 307)
- Weiss: describes values from different stakeholders (p. 210)
- Rossi: prefers criteria to come from stakeholder agreement, but is unclear on how to avoid biases

Data collecting scope
- Scriven: the outcome effects of the program
- Campbell: outcome consequences of the program (p. 161)
- Stake: 13 kinds of data covering the program's antecedents, transactions, and outcomes (p. 282)
- Weiss: not only the immediate outcome of the program, but also its input, implementation, and long-term outcome (p. 205)
- Rossi: data on the program's implementation and outcome, as well as its efficiency

Data collecting method
- Scriven: not specific
- Campbell: randomized experiments or strong quasi-experiments
- Stake: qualitative methods, case studies
- Weiss: both quantitative and qualitative methods (pp. 204-205)
- Rossi: accepts both quantitative and qualitative methods, with a preference for the former

Evaluation findings
- Scriven: one final evaluative judgment regarding the program's effects and "need"
- Campbell: reports on the causal inference of program effects
- Stake: an accurate portrayal of the program, in diverse formats (pp. 282-283)
- Weiss: separate summaries for different stakeholder groups, with the knowledge and information of greatest interest to each
- Rossi: includes not only the program's effects, but also its policy relevancy, such as cost-efficiency and alternative options (p. 409)

Use of evaluation results
- Scriven: instrumental use of the results to choose the better evaluand or improve one (p. 109)
- Campbell: instrumental use of the results to solve a social problem
- Stake: provides vicarious experiences of the program
- Weiss: enlightenment use of the results to accumulate knowledge and shape policy-making
- Rossi: instrumental, conceptual, and persuasive use of the results (pp. 410-411)

Dissemination of the evaluation results
- Scriven: not mentioned
- Campbell: dissemination is not the concern of an evaluator (p. 162)
- Stake: not mentioned
- Weiss: actively facilitates the dissemination of results (p. 207)
- Rossi: "a definite responsibility of evaluation researchers" (p. 410)
Table 2. A summary of my position on the issue of the evaluator's role

Phases of evaluation, each paired with the evaluator's responsibility at that phase:
- Program selection
- Criteria selection
- Data collecting scope
- Data collecting methods
- Evaluation findings
- Use of evaluation results
- Dissemination of evaluation results