Performance Measurement Essay Assignment Help
Assignment 1: Performance Measurement Essay
This module taught you that performance measurement has become an important part of the program evaluation process: stakeholders in an agency or program want to see evidence that outcomes are being met and, if they are not, what specifically needs to be addressed or fixed in order to meet them.
Tasks:
In about 300 words, respond to the following:
- How does performance measurement differ from program evaluation?
- What do you think are the key benefits of using performance measurement as part of the program evaluation process?
- If you were working for an agency that, in its upcoming year, has only enough money and staff to develop and carry out either program evaluation capabilities (putting its resources into developing the capability to conduct program evaluations) or performance measurement capabilities (putting its resources into measuring the key outputs and outcomes of its programs), which of these would you do first? Why?
Discussion board 2 with references
This module taught you that ethical practice during the evaluation process is vital to the reliability of the evaluation that is produced. As in most areas of the helping professions, ethical principles are not absolute, requiring an evaluator to carefully consider the actions he or she takes and the strategies he or she employs.
Tasks:
Using the Argosy University online library resources and the Internet, research and read about the assignment topic. In a minimum of 200 words, respond to the following:
- Analyze Fiona’s case from your textbook. What ethical issues do you believe she is facing?
- What are the “benefits and costs” from Fiona’s perspective?
- What are the “benefits and costs” from the agency’s perspective?
- If you were Fiona, what would be your decision (conduct the evaluation in-house or contract it out)? What is your rationale for this decision?
Fiona’s case is presented below.
Fiona’s Choice: An Ethical Dilemma for a Program Evaluator
Fiona Barnes did not feel well as the deputy commissioner’s office door closed behind her. She walked back to her office wondering why bad news seems to come on Friday afternoons. Sitting at her desk, she went over the events of the last several days, and the decision that lay ahead of her. This was clearly the most difficult situation she had encountered since her promotion to the position of Director of Evaluation in the Department of Human Services.
Fiona’s predicament had begun the day before, when the new commissioner, Fran Atkin, had called a meeting with Fiona and the deputy commissioner. The governor was in a difficult position: in his recent election campaign, he had made potentially conflicting campaign promises. He had promised to reduce taxes and had also promised to maintain existing health and social programs, while balancing the state budget.
The week before, a loud and lengthy meeting of the commissioners in the state government had resulted in a course of action intended to resolve the issue of conflicting election promises. Fran Atkin had been persuaded by the governor that she should meet with the senior staff in her department, and after the meeting, a major evaluation of the department’s programs would be announced. The evaluation would provide the governor with some post-election breathing space. But the evaluation results were predetermined—they would be used to justify program cuts. In sum, “a compassionate” but substantial reduction in the department’s social programs would be made to ensure the department’s contribution to a balanced budget.
As the new commissioner, Fran Atkin relied on her deputy commissioner, Elinor Ames. Elinor had been one of several deputies to continue on under the new administration and had been heavily committed to developing and implementing key programs in the department, under the previous administration. Her success in doing that had been a principal reason why she had been promoted to deputy commissioner.
On Wednesday, the day before the meeting with Fiona, Fran Atkin had met with Elinor Ames to explain the decision reached by the governor, downplaying the contentiousness of the discussion. Fran had acknowledged some discomfort with her position, but she believed her department now had a mandate. Proceeding with it was in the public’s interest.
Elinor was upset with the governor’s decision. She had fought hard over the years to build the programs in question. Now she was being told to dismantle her legacy—programs she believed in that made up a considerable part of her budget and person-year allocations.
In her meeting with Fiona on Friday afternoon, Elinor had filled Fiona in on the political rationale for the decision to cut human service programs. She also made clear what Fiona had suspected when they had met with the commissioner earlier that week—the outcomes of the evaluation were predetermined: they would show that key programs where substantial resources were tied up were not effective and would be used to justify cuts to the department’s programs.
Fiona was upset with the commissioner’s intended use of her branch. Elinor, watching Fiona’s reactions closely, had expressed some regret over the situation. After some hesitation, she suggested that she and Fiona could work on the evaluation together, “to ensure that it meets our needs and is done according to our standards.” After pausing once more, Elinor added, “Of course, Fiona, if you do not feel that the branch has the capabilities needed to undertake this project, we can contract it out. I know some good people in this area.”
Fiona was shown to the door and asked to think about it over the weekend.
Fiona Barnes took pride in her growing reputation as a competent and serious director of a good evaluation shop. Her people did good work that was viewed as being honest, and they prided themselves on being able to handle any work that came their way. Elinor Ames had appointed Fiona to the job, and now this.
(McDavid 395-396)
McDavid, James C. Program Evaluation and Performance Measurement: An Introduction to Practice. SAGE Publications, Inc, 08/2005. VitalBook file.
The citation provided is a guideline. Please check each citation for accuracy before use.
Writing assignment 1 along with references
Now that you have chosen your agency or program, familiarized yourself with its purpose, and finished your needs assessment, in preparation for your final program evaluation you will create a logic model this module. This model will help you visualize the key inputs, components, objectives, outputs, constructs, and outcomes for your agency and program, including the outcomes that you will discuss in detail in M5 Assignment 2 (RA 2). Once you complete your logic model, you will also consider the role of performance measurement in your proposed program evaluation.
See examples of logic models in your textbook; these would be good examples to use as you develop your own logic model.
In addition to submitting your logic model, write 3–4 paragraphs, addressing the following:
1. Provide a description of your logic model. Make sure that your explanation is concise and clearly explains the logic of the connections you have made in your model.
2. Will performance measurement be a part of your program evaluation? Why or why not?
3. If so, how will you measure performance within the agency or program that you are evaluating?
Some examples of logic models are presented below.
There are alternative ways of graphically representing programs as open systems. Collectively, these representations are often referred to as program logic models. They have in common that all are intended as ways of categorizing program activities and program outcomes. They differ in how the categories are labeled and in the level of detail in the modeling process itself, specifically, the extent to which the logic models are also intended to be causal models that make explicit connections between activities, outputs, and outcomes.
In this chapter, two program logic modeling approaches are introduced and illustrated. We begin with a basic example of a program that has some of the essential features of logic models. The example of the Laurel House program will illustrate how logic models can be used to categorize program processes and intended results. Then we present a more general logic modeling framework that builds on the features of our basic example, and we define its main features. After we describe the framework, we illustrate it with two examples of logic models based on actual programs.

All logic models have in common the function of communicating to stakeholders the key parts of programs and the intended relationships among them (Coffman, 1999). In addition, logic models play an important role in performance management. They can be used as part of the strategic planning process to clarify intended objectives and the program designs that are intended to achieve outcomes (Coffman, 1999). In a review article comparing international applications of logic modeling, Montague (2000) points out that in Canada, Australia, and the United States, logic modeling has become central to performance measurement and reporting systems. Governments that have embraced performance measurement systems generally also invest in logic modeling as a way of presenting their programs and linking them to intended results. In Canada, for example, the federal agency that supports government-wide evaluation activities offers a preparation guide for federal department performance reports that includes a logic model template. The report states:
In discussing strategic outcomes, departments need to better demonstrate how their program results and resources contribute to those outcomes. To facilitate the provision of this information, we are asking departments to use the [logic model] template provided below. (Treasury Board of Canada Secretariat, 2003, p. 25)
Likewise, in the European Union, logic modeling has been advocated as an important part of sound program evaluation practice (Nagarajan & Vanheukelen, 1997).
A BASIC LOGIC MODELING APPROACH
Public sector organizations and nonprofit agencies are being encouraged or even exhorted to develop performance measurement systems. A key step in the development of performance measures is to describe the programs in an organization in a way that facilitates developing measures of program activities and outcomes. Program logic models are widely used for this purpose.
One illustration of such a model is the program logic for Laurel House, a nonprofit social service agency in a small west coast city in Canada. The description of the Laurel House program is as follows:
Laurel House provides social, recreational, and pre-vocational activities and skills development for over 200 adults with serious mental illness. Some individuals have attended for years, and for them Laurel House is their main support in continuing community living; others may attend for a much shorter period, as needed, before moving on to more mainstream “activities.” About 60 people attend each day, many of whom attend 2-3 times a week and some every day. Laurel House itself is a grand old building ideally located one mile from downtown and one mile from the hospital, with five bus routes less than a block away. The House activities are also carried out in the ground floor of an office building next door, which shares with the House a quiet shady garden. Some activities take place in the general community with Laurel House staff and members participating together (Dinsdale, Cutt, & Murray, 1998, p. 106).
The logic model for Laurel House is shown in Table 2.1. Its major features include categories for inputs, activities, outputs, and three kinds of outcomes: initial, intermediate, and long term. We will briefly discuss what these categories mean for the logic model of Laurel House and more systematically define the categories when we introduce our general framework for constructing logic models.
The activities for Laurel House are stated in general terms, and the model is intended as a way of translating a verbal description of the program into a model that succinctly depicts the program. The outputs indicate the work done and are the immediate results of activities. Outcomes are intended to follow from the outputs and represent the objectives of the program. Logic models are generally displayed so that a time-related sequence is implied: often, resources occur first, then activities, then outputs. The sequence of outcomes is intended to indicate intended causality. Intermediate outcomes follow from the initial outcomes, and the long-term outcomes are intended as results of intermediate outcomes. Distinguishing between initial, intermediate, and long-term outcomes recognizes that not all effects of program activities are discernible immediately on completion of the program. For example, welfare recipients who participate in a program designed to make them long-term members of the workforce may not find employment immediately upon finishing the program. The program, however, may have increased their self-confidence and their job-hunting, interviewing, and resume-writing skills. These initial outcomes may, within a year of program completion, lead to the long-term outcome of many participants finding full-time employment. Such situations remind us that some program outcomes need to be measured at one or more follow-up points in time, perhaps 6 months or a year (or more, depending on the intended logic) after program completion (Martin & Kettner, 1996).
Table 2.1 Program Logic Model of Laurel House
Inputs:
- Funding: $10 lunch charge; Capital Health Region and United Way funding
- 179 staff hrs/wk
- 25 volunteer hrs/wk

Activities:
- Various self- or group-determined activities (in and out of the house)
- Learning about their illness
- Links to needed services and supports
- Goal setting
- Activities to enhance peer support networks
- Prevocational skills sessions

Outputs:
- Number of clients served
- Number of life skill (e.g., cooking and medicine) related sessions
- Number of referrals to supports
- Duration and frequency of attendance
- Number of meals

Initial outcomes:
- Members indicate an increased mastery of life; they gain a greater understanding and awareness of their illness and how to cope with it, for example, improved life skills (cooking and hygiene)
- Members are more aware of available services and supports
- Members gain a greater appreciation for the importance of developing a wider social network

Intermediate outcomes:
- Members develop stronger social networks and friendships, and employ coping and life skills

Long-term outcomes:
- Members indicate an increased quality of life

A logic model like the one in Table 2.1 might be used to develop performance measures—the words and phrases in the model could become the basis for more clearly defined program constructs, which, in turn, could be used to develop measures. But the model has a limitation. It does not specify how its various activities are linked to specific outputs, or how particular outputs are connected to initial outcomes. In other words, the model offers us a way to categorize and describe program processes and outcomes, but it is of limited use as a causal model of the intended program structure.
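The category structure of a logic model like Table 2.1 translates naturally into a simple data structure. The sketch below is illustrative only (the item wordings are abbreviated from the table, and the `describe` helper is invented here, not part of the text):

```python
# Hypothetical sketch: the Laurel House logic model (Table 2.1) as a
# dictionary keyed by logic-model category.
laurel_house = {
    "inputs": ["$10 lunch charge", "Capital Health Region and United Way funding",
               "179 staff hrs/wk", "25 volunteer hrs/wk"],
    "activities": ["self- or group-determined activities", "learning about illness",
                   "links to services and supports", "goal setting",
                   "peer support networks", "prevocational skills sessions"],
    "outputs": ["clients served", "life-skill sessions", "referrals to supports",
                "attendance duration/frequency", "meals"],
    "initial_outcomes": ["increased mastery of life", "improved life skills",
                         "awareness of services and supports"],
    "intermediate_outcomes": ["stronger social networks and friendships"],
    "long_term_outcomes": ["increased quality of life"],
}

# The time-related sequence the chapter describes: inputs -> activities ->
# outputs -> initial -> intermediate -> long-term outcomes.
SEQUENCE = ["inputs", "activities", "outputs",
            "initial_outcomes", "intermediate_outcomes", "long_term_outcomes"]

def describe(model):
    """Print each category in its implied time-related order."""
    for category in SEQUENCE:
        print(f"{category}: {len(model[category])} item(s)")
```

Note that, exactly as the text observes, this representation categorizes items but encodes no causal links between a specific activity and a specific output or outcome.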
LOGIC MODELS THAT CATEGORIZE AND SPECIFY INTENDED CAUSAL LINKAGES
Table 2.2 presents a framework for modeling program logics that builds upon the basic logic modeling approach we have introduced so far. We will discuss the framework and then introduce several examples of logic models that have been constructed using the framework. The framework in Table 2.2 does two things. First, it classifies the main parts of a typical logic model into inputs, components, implementation objectives, linking constructs, and outcomes. We will define each of these, and provide examples where it is appropriate.
Program inputs are the resources that are required to operate the program—they typically include money, people, equipment, facilities, and knowledge. Program inputs are an important part of logic models. It is possible to monetize all inputs, that is, convert them to equivalent dollar values. Evaluations that compare program costs to outputs (technical efficiency), compare program costs to outcomes (cost-effectiveness), or compare costs to the monetized value of outcomes (cost-benefit analysis) all require estimates of inputs expressed in dollars. Increasingly, performance measurement systems that are being implemented by governments are connecting program costs to results. Examples of this trend are the PART process in the Office of Management and Budget in the U.S. Federal Government (U.S. General Accounting Office, 2004) and the commitment of the Texas State Government to include performance measurement results with program budgetary requests (Tucker, 2002).
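The three cost comparisons named above can be illustrated with a few lines of arithmetic. All of the figures below are invented for illustration; they are not from the text:

```python
# Hypothetical figures for a job training program (illustrative only).
total_cost = 500_000.0         # monetized inputs, in dollars
clients_trained = 400          # an output
clients_employed = 120         # an outcome
monetized_outcome = 650_000.0  # dollar value assigned to the outcomes

cost_per_output = total_cost / clients_trained    # technical efficiency
cost_per_outcome = total_cost / clients_employed  # cost-effectiveness
net_benefit = monetized_outcome - total_cost      # cost-benefit comparison

print(f"cost per client trained:  ${cost_per_output:,.2f}")   # $1,250.00
print(f"cost per client employed: ${cost_per_outcome:,.2f}")  # $4,166.67
print(f"net benefit:              ${net_benefit:,.2f}")       # $150,000.00
```

The point of the sketch is that each comparison requires inputs expressed in dollars, and cost-benefit analysis additionally requires monetizing the outcomes themselves.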
Program components are clusters of activities in the program. They can be administrative units within an organization that is delivering a program. For example, a job training program with three components (intake, skills development, and job placement) might be organized so that there are work groups for each of these three components. Alternatively, it might be that one work group does these three clusters of activities—smaller organizations in the nonprofit sector typically expect employees to become generalists.
Implementation objectives are included for each component of a program logic model. This innovation in modeling program logics was introduced by Rush and Ogborne (1991) and is intended to focus attention on the activities in the program that are required to produce program outputs. Implementation objectives are about getting the program running, that is, getting things done in the program itself that are necessary in order to have an opportunity to achieve the intended outcomes. Implementation objectives simply state the work that the program managers and participants need to do, without anticipating intended outcomes. Typical ways of stating implementation objectives begin with words like: “to provide,” “to give,” “to do,” or “to make.” An example of an implementation objective for the intake component of a job training program might be: “To assess the training needs of clients.” Another example from the skills development component of the same program might be: “To provide work skills training for clients.”
(McDavid 43-48)
McDavid, James C. Program Evaluation and Performance Measurement: An Introduction to Practice. SAGE Publications, Inc, 08/2005. VitalBook file.
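The component/implementation-objective structure described in the excerpt can also be sketched as data. The sketch below uses the job training program example from the text; the "job placement" objective wording is invented here for illustration:

```python
# Sketch of the component / implementation-objective part of the
# Table 2.2-style framework, using the job training program example.
job_training_program = {
    "intake": ["To assess the training needs of clients"],
    "skills development": ["To provide work skills training for clients"],
    "job placement": ["To place clients with employers"],  # hypothetical wording
}

def states_work_only(objective: str) -> bool:
    """Implementation objectives state work to be done ('to provide',
    'to assess', ...) without anticipating intended outcomes."""
    return objective.lower().startswith("to ")

for component, objectives in job_training_program.items():
    for objective in objectives:
        print(f"{component}: {objective}")
```

The `states_work_only` check mirrors the text's observation that implementation objectives typically begin with words like "to provide," "to give," "to do," or "to make."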
RUBRIC
Content (maximum 50% of total points)
- Zero points: Student failed to submit the final paper.
- 20 of 50 points (Poor/Unsatisfactory): The essay illustrates poor understanding of the relevant material by failing to address, or incorrectly addressing, the relevant content; failing to identify or inaccurately explaining/defining key concepts/ideas; ignoring or incorrectly explaining key points/claims and the reasoning behind them; and/or incorrectly or inappropriately using terminology. Elements of the response are lacking.
- 30 of 50 points (Satisfactory): The essay illustrates a rudimentary understanding of the relevant material by mentioning but not fully explaining the relevant content; identifying some of the key concepts/ideas but failing to fully or accurately explain many of them; using terminology, though sometimes inaccurately or inappropriately; and/or incorporating some key claims/points but failing to explain the reasoning behind them or doing so inaccurately. Elements of the required response may also be lacking.
- 40 of 50 points (Good): The essay illustrates a solid understanding of the relevant material by correctly addressing most of the relevant content; identifying and explaining most of the key concepts/ideas; using correct terminology; explaining the reasoning behind most of the key points/claims; and/or, where necessary or useful, substantiating some points with accurate examples. The answer is complete.
- 50 points (Excellent): The essay illustrates an exemplary understanding of the relevant material by thoroughly and correctly addressing the relevant content; identifying and explaining all of the key concepts/ideas; using correct terminology; explaining the reasoning behind key points/claims; and substantiating points, as necessary or useful, with several accurate and illuminating examples. No aspects of the required answer are missing.

Use of Sources (maximum 20% of total points)
- Zero points: Student failed to include citations and/or references, or failed to submit a final paper.
- 5 of 20 points: Sources are seldom cited to support statements, and/or citations are not recognizable as APA 6th edition format. There are major errors in the references and citations, and/or a major reliance on highly questionable sources. The student fails to provide an adequate synthesis of the research collected for the paper.
- 10 of 20 points: References to scholarly sources are given only occasionally; many statements seem unsubstantiated. Frequent errors in APA 6th edition format leave the reader confused about the source of the information. There are significant errors in the references and citations, and/or significant use of highly questionable sources.
- 15 of 20 points: Credible scholarly sources are used effectively to support claims and are, for the most part, clear and fairly represented. APA 6th edition format is used with only a few minor errors. There are minor errors in references and/or citations, and/or some use of questionable sources.
- 20 points: Credible scholarly sources are used to give compelling evidence to support claims and are clearly and fairly represented. APA 6th edition format is used accurately and consistently. The student uses more than the required number of references in developing the assignment.

Grammar (maximum 20% of total points)
- Zero points: Student failed to submit the final paper.
- 5 of 20 points: The paper does not communicate ideas/points clearly due to inappropriate use of terminology and vague language; thoughts and sentences are disjointed or incomprehensible; organization is lacking; and/or there are numerous grammatical, spelling, and punctuation errors.
- 10 of 20 points: The paper is often unclear and difficult to follow due to some inappropriate terminology and/or vague language; ideas may be fragmented, wandering, and/or repetitive; organization is poor; and/or there are some grammatical, spelling, and punctuation errors.
- 15 of 20 points: The paper is mostly clear as a result of appropriate use of terminology and minimal vagueness; no tangents and no repetition; fairly good organization; and almost perfect grammar, spelling, punctuation, and word usage.
- 20 points: The paper is clear, concise, and a pleasure to read as a result of appropriate and precise use of terminology; total coherence of thoughts, presentation, and logical organization; and the essay is error free.

Structure of the Paper (maximum 10% of total points)
- Zero points: Student failed to submit the final paper.
- 3 of 10 points: The student needs to develop better formatting skills. The paper omits significant structural elements required for an APA 6th edition paper, and its formatting has major flaws; it does not conform to APA 6th edition requirements at all.
- 5 of 10 points: The appearance of the final paper demonstrates the student's limited ability to format the paper. There are significant formatting errors and/or the omission of major components of an APA 6th edition paper, such as the cover page, abstract, or page numbers. The paper may also have major formatting issues with spacing or paragraph formation, a font size that does not conform to requirements, and/or a length significantly over or under what is required.
- 7 of 10 points: The paper presents an above-average use of formatting skills, with only slight errors. These can include small errors or omissions in the cover page, abstract, page numbers, or headers; slight formatting issues with spacing or font; and/or a paper that slightly exceeds or undershoots the required number of pages.
- 10 points: The student provides a high-caliber, correctly formatted paper, including an APA 6th edition cover page, abstract, page numbers, and headers, double-spaced in 12-point Times New Roman font. The paper conforms to the required number of pages, going neither over nor under the specified length.