Monetary Relationship Between Evaluators Assignment
- Assess the likelihood that evaluation results might be used. (Shadish, 1991, p.198)
- Ask questions that can “provide an intellectual setting of concepts, propositions, orientations, and empirical generalizations” for policy making. (Weiss, 1978, as cited in Shadish 1991, p.202)
- Use well-designed qualitative and quantitative methods to conduct evaluation studies, with emphasis not only on the immediate outcome of a program but also on its inputs, implementation, and long-term outcomes. (Shadish, 1991, p.205)
International Education Studies www.ccsenet.org/ies
- Draw policy implications from evaluation research by compiling separate summaries for multiple stakeholder groups, each containing the knowledge and information of greatest interest to them, and make recommendations for future programs based on the evaluation results. (Shadish, 1991, p.205-206)

Strengths and Weaknesses of Weiss's Position: Weiss further differentiates the role of the evaluator from that of a researcher by addressing the complex political context that besets social programs. She warns evaluators against political naivety and urges them to conduct evaluations that can be used in policy-making, in the form of “enlightenment” rather than “instrumental use”. The educator role she assigns to evaluators reflects her pragmatic view of evaluation and suggests a new mode in which evaluation can be used.
However, the role Weiss proposes for the evaluator has some intrinsic flaws. First, such a role fails, ironically, to consider the variety of different contexts. For instance, the decision of a state or federal government to hire an evaluator is often made not for the purpose of “being educated”, but to get concrete data regarding the program's effects; the Department of Education's proposal to conduct “scientifically based evaluation” is a good example. As a result, an evaluator who uses case studies to describe a program's inputs, implementation, and long-term effects might not be appreciated by policy-makers in this context. Second, her emphasis on providing information to policy-makers poses the danger of evaluators becoming the servants of that particular stakeholder group. What should the role of an evaluator be when the interests of different stakeholder groups conflict and speaking for the underrepresented group might limit the use of evaluation results in the policy-making process?

2.5 Rossi

Rossi did not give an explicit definition of the role of evaluators. Rather, the roles an evaluator plays may vary across the different stages of an evaluation. For example, in the program conceptualization stage, an evaluator sometimes takes the role of a social scientist, incorporating social science theories into the development of an intervention model. (Shadish, 1991, p.389-391) In the program implementation stage, an evaluator works as a program administrator, making sure the program is implemented as expected so as to “rule out faulty implementation as a culprit in poor program outcome”. (Shadish, 1991, p.381) Besides, the operational data collected this way can also be useful for the future dissemination of the program.
When determining program utility, an evaluator takes on the roles of a methodologist and a project manager, selecting and applying appropriate research methods to assess the impact of the program intervention and conducting efficiency analyses of the program, such as cost-benefit and cost-effectiveness analysis. (Rossi & Freeman, 1985, p.327-328) The type of social program will also affect the roles an evaluator plays during the evaluation. Rossi categorized social programs into three types: innovative, established, and fine-tuning programs. When evaluating innovative programs, for instance, much emphasis is given to the conceptualization of the program. (Shadish, 1991, p.404) An evaluator's responsibilities will include setting program objectives and constructing an impact model linking program objectives and activities, based not only on stakeholders' views but also on the results of needs assessments and social science theories.
However, conceptualization is rarely the focus when evaluating established programs, since their conceptual frameworks already exist and are less likely to change. (Shadish, 1991, p.404) Instead, an evaluator takes a more summative approach, and much of his/her responsibility falls to judging program accountability. The role of an evaluator is less summative in fine-tuning programs, where the emphasis is on identifying needs for change and making formative modifications. Rossi's attempt to integrate the works of various theorists into one theoretical framework also helps shape his position on the proper role of an evaluator: he appreciates the strengths of the different roles proposed by other theorists and assigns each role to the contexts that best fit it.
The “good enough rule” of doing evaluation proposed by Rossi also frees evaluators from the everlasting debates of internal vs. external validity, quantitative vs. qualitative methods, and descriptive vs. prescriptive value. Instead, it allows evaluators to choose the best possible design by assessing all kinds of “trade-offs”.

Strengths and Weaknesses of Rossi's Position: In my opinion, Rossi's stance on the role of the evaluator is closest to reality, since evaluation is by nature highly context-based. The nature of different social programs, clients' expectations, evaluators' backgrounds and expertise, employment status, available resources and constraints, as well as the influence of culture and politics, can all result in quite different approaches to conducting an evaluation. Even within the same evaluation, activities in different stages will require different competencies from an evaluator. As a result, there should not be only one proper role for an evaluator, and Rossi's attempt to integrate different roles into one theoretical framework is plausible. However, Rossi is less explicit in linking a specific context to a specific role, and in linking that role to the set of responsibilities it entails.
International Education Studies Vol. 3, No. 2; May 2010
2.6 A Comparative Analysis of Different Theorists' Positions

Looking beyond the different metaphors theorists use to express their opinions on the roles of the evaluator, a comparative analysis is presented in this section to dissect those roles into the specific behaviors an evaluator should exhibit in the different phases of an evaluation. Those phases are listed in the table below as: program selection, criteria selection, data collection, evaluation findings, and evaluation use and dissemination. (See Table 1) As the table shows, the different roles an evaluator takes can result in quite different approaches to evaluation in certain phases while still sharing much similarity in others. All citations in the table are from the book Foundations of Program Evaluation. (Shadish et al., 1991)

Insert Table 1 Here

3. Proposed Resolution of the Fundamental Issue

The current debate among evaluation theorists regarding the proper roles of an evaluator reflects their different stances on other fundamental issues, such as the value of evaluation (descriptive vs. prescriptive), the methods of evaluation (quantitative vs. qualitative), the use of evaluation (instrumental vs. enlightenment), and the purpose of evaluation (summative vs. formative). My own resolution of the issue of the evaluator's role is likewise based on my understanding of those fundamental issues.
Value: an evaluator should prioritize the values of different stakeholder groups when selecting the criteria of merit for evaluands. Not only does the descriptive-values approach reflect the concept of a plural democracy, it also orients the evaluation questions toward the concerns of the stakeholders, thus making it more likely that the evaluation findings will be used by them. By taking into account different opinions regarding the values of the program, an evaluator can also reduce personal bias, which is hard to avoid when the evaluator is the one who assigns values. However, when different stakeholders cannot reach agreement on the issue of value, an evaluator should take measures to reach his/her own judgment about the program's value. For example, the evaluator can conduct a needs assessment to identify the primary stakeholder group, the one that will be most affected by the program, and prioritize their values when selecting the criteria of merit for the evaluation.
Methods: an evaluator should be familiar with both quantitative and qualitative methods and accept both as available methods for conducting evaluation. However, an evaluator should meet with clients before the evaluation and get their opinions on the suggested methods. If the clients strongly prefer quantitative data and clear charts in the final report, then a case study is not an ideal method. If the clients want the program to suffer minimum intrusion from the evaluation, then experimental or quasi-experimental designs should not be the first options considered. This is not to say that evaluators should immediately discard the methods they consider best for the evaluation if clients do not accept them; but it is essential for evaluators to convince their clients of the proposed methods and reach an agreement before applying any method.
Use: an evaluator should emphasize the instrumental use of his/her evaluation findings and actively promote the dissemination of the evaluation results. It is hard to imagine an evaluation being initiated with no intent to learn the effects of a social program, especially when the program costs a great deal of taxpayers' money and affects a large population. In my opinion, ignoring the instrumental use of evaluation is highly irresponsible and has a detrimental impact on the evaluation profession: why should I hire someone to evaluate a program if he/she cannot tell me whether the program is working? The enlightenment use of evaluation sounds promising but has its limitations in practice. First, it is hard to determine the scope of data collection: with the potential users of evaluation findings unknown, it is hard to know what specific data will be useful to them. Second, it is hard to enlighten people without telling them what works in the program and what does not; we have to make a judgment about success and failure if we want others to learn from either. Last but not least, clients' expectations and time constraints often make conducting evaluation for enlightenment impossible. It takes time to measure a program's inputs, implementation, outcomes, long-term impacts, people's attitudes, and so on, but most clients do not have that much time for an evaluation.

Purpose: I prefer a more summative role for an evaluator, for the following reasons.
First of all, an evaluator is hired for his/her expertise in making rational judgments. If clients want only vicarious experiences or a set of quantitative data, they can simply hire writers or statisticians to do the job. Second, even when the purpose is to make a program better, there is a difference between an evaluator and a program manager. At some point, an evaluator will give a summative opinion about what works in the program and what does not. An evaluator can make recommendations about how to improve the program; but ultimately it is up to the program manager to make that decision based on his/her knowledge of the program, such as the available resources or people's commitment. Lastly, being too involved in improving a program might hinder objective and rational judgment by evaluators. Considerable personal bias can be expected if the evaluator of a program becomes its advocate, since it is hard for an evaluator to state that a program is not working and should be terminated if the evaluator believes in the program and has invested heavily in it. However, by reviewing other theorists' positions on the issue of the evaluator's roles, we can see that each role has its unique strengths and weaknesses, and no single role fits all evaluations. Instead of using another metaphor to describe my position, I will define the role of the evaluator in terms of the things an evaluator will do during an evaluation, as shown in the following table. (See Table 2)

Insert Table 2 Here

Nevertheless, I understand that my resolution of this fundamental issue is not perfect and is vulnerable to criticism in several respects.
First of all, the role I propose for evaluators might not work in certain contexts; it is also very likely that evaluators who take different roles can still achieve great success in conducting evaluations. Take Stake, Weiss, Wholey and Cronbach, for example: their differences regarding the proper role of an evaluator do not prevent any of them from presenting strong evaluation cases to support their positions. In other words, it is almost as if I propose a role for an evaluator and caution evaluators to take that role, while at the same time indicating that other roles might work even better in their contexts. Second, evaluators' different cultural, racial and ethnic backgrounds could also affect their judgments about issues such as clients' expectations, the dynamics between program administrators and evaluators, and the monetary relationships among evaluators, program administrators, the federal government and stakeholder groups, as those issues might vary significantly across backgrounds and contexts. Since my resolution is based on my judgment of such issues, if my judgment incorrectly reflects the reality elsewhere, then my resolution might not fit well in that context. Last but not least, one can also attack the logic behind my approach to defining the role of an evaluator. Maybe it is wrong to define the evaluator's role in terms of what he/she should do during the evaluation; maybe an evaluator's role should instead be associated with the purpose of an evaluation, with that role then defining what an evaluator should do.

References

Campbell, D. T. (1984). Can we be scientific in applied social science? In Evaluation studies review annual (Vol. 9). Beverly Hills, CA: Sage.

Rossi, P. H., & Freeman, H. E. (1985). Evaluation: A systematic approach (3rd ed.). Beverly Hills, CA: Sage.

Scriven, M. (1969). An introduction to meta-evaluation. Education Product Report, 2, 36-38.

Scriven, M. (1986). New frontiers of evaluation. Evaluation Practice, 7, 7-44.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Shadish, W. R., Jr., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation. Newbury Park, CA: Sage.

Smith, N. L., & Brandon, P. R. (Eds.). (2008). Fundamental issues in evaluation. New York, NY: Guilford.

Stake, R. E. (1980). Program evaluation, particularly responsive evaluation. In Rethinking educational research (pp. 72-87). London: Hodder & Stoughton.

Stake, R. E., & Trumbull, D. J. (1982). Naturalistic generalizations. Review Journal of Philosophy & Social Science, 7, 1-12.

Weiss, C. H. (1978). Improving the linkage between social research and public policy. In L. E. Lynn (Ed.), Knowledge and policy: The uncertain connection (pp. 23-81). Washington, DC: National Academy of Sciences.

Weiss, C. H. (1988). If program decisions hinged only on information: A response to Patton. Evaluation Practice, 9(3), 15-28.

Notes

Note 1. Readers gain new perspectives and ideas from reading evaluation reports, which results in accumulated knowledge and evidence they can use in many ways, including defining a problem, suggesting a solution, and making policy. Differing from the instrumental use of evaluation results, enlightenment emphasizes the long-term effect on policy-making.