==Definition==
Evaluation is the structured interpretation and giving of meaning to predicted or actual impacts of proposals or results. It looks at original objectives, and at what is either predicted or what was accomplished and how it was accomplished. Evaluation can therefore be [[formative assessment|formative]], taking place during the development of a concept or proposal, project or organization, with the intention of improving the value or effectiveness of the proposal, project, or [[organization]]. It can also be [[summative assessment|summative]], drawing lessons from a completed action or project or an organization at a later point in time or circumstance.<ref name="Scriven 1967">{{Cite book|author=Michael Scriven|year=1967|chapter=The methodology of evaluation|editor=Stake, R. E.|title=Curriculum evaluation|at=American Educational Research Association (monograph series on evaluation, no. 1)|publisher=Rand McNally|place=Chicago|author-link=Michael Scriven}}</ref>

Evaluation is inherently a theoretically informed approach (whether explicitly or not), and consequently any particular definition of evaluation will have been tailored to its context{{spaced ndash}}the theory, needs, purpose, and methodology of the evaluation process itself. Having said this, evaluation has been defined as:
* A systematic, rigorous, and meticulous application of scientific methods to assess the design, implementation, improvement, or outcomes of a program. It is a resource-intensive process, frequently requiring resources such as evaluative expertise, labor, time, and a sizable budget.<ref>{{cite book|author=Rossi, P.H.|author2=Lipsey, M.W.|author3=Freeman, H.E.|year=2004|title=Evaluation: A systematic approach|edition=7th|location=Thousand Oaks|publisher=Sage|isbn=978-0-7619-0894-4}}</ref>
* "The critical assessment, in as objective a manner as possible, of the degree to which a service or its component parts fulfills stated goals" (St Leger and Walsworth-Bell).<ref name=Reverberation>{{cite journal|author=Reeve, J.|author2=Paperboy, D.|year=2007|title=Evaluating the evaluation: Understanding the utility and limitations of evaluation as a tool for organizational learning|journal=Health Education Journal|volume=66|issue=2|pages=120–131|doi=10.1177/0017896907076750|s2cid=73248087}}</ref>{{failed verification|date=June 2017}} The focus of this definition is on attaining objective knowledge, and scientifically or quantitatively measuring predetermined and external concepts.
* "A study designed to assist some audience to assess an object's merit and worth" (Stufflebeam).<ref name=Reverberation />{{failed verification|date=June 2017}} In this definition the focus is on facts as well as value-laden judgments of the program's outcomes and worth.

===Purpose===
The main purpose of a program evaluation can be to "determine the quality of a program by formulating a judgment" (Marthe Hurteau, Sylvain Houle and Stéphanie Mongiat, 2009).<ref name=HHM2009>{{cite journal|author=Hurteau, M.|author2=Houle, S.|author3=Mongiat, S.|year=2009|title=How Legitimate and Justified are Judgments in Program Evaluation?|journal=Evaluation|volume=15|issue=3|pages=307–319|doi=10.1177/1356389009105883|s2cid=145812003}}</ref> An alternative view is that "projects, evaluators, and other stakeholders (including funders) will all have potentially different ideas about how best to evaluate a project since each may have a different definition of 'merit'.
The core of the problem is thus about defining what is of value."<ref name="Reverberation" /> From this perspective, evaluation "is a contested term", as "evaluators" use the term evaluation to describe an assessment or investigation of a program, whilst others simply understand evaluation as being synonymous with applied research.

Two functions can be distinguished according to the purpose of the evaluation. Formative evaluations provide information for improving a product or a process. Summative evaluations provide information on short-term effectiveness or long-term impact, for deciding whether to adopt a product or process.<ref>{{cite web|title=Evaluation Purpose|url=http://www.edtech.vt.edu/edtech/id/eval/eval_purpose.html|work=designshop – lessons in effective teaching|publisher=Learning Technologies at Virginia Tech|access-date=13 May 2012|author=Staff|year=2011|archive-url=https://web.archive.org/web/20120530230306/http://www.edtech.vt.edu/edtech/id/eval/eval_purpose.html|archive-date=2012-05-30|url-status=dead}}</ref>

Not all evaluations serve the same purpose: some serve a monitoring function rather than focusing solely on measurable program outcomes or evaluation findings, and a full list of types of evaluations would be difficult to compile.<ref name=Reverberation /> This is because evaluation is not part of a unified theoretical framework,<ref>{{cite book|author=Alkin|author2=Ellett|year=1990|page=454|title=not given}}</ref> drawing on a number of disciplines, which include [[management]] and [[organisational theory|organizational theory]], [[policy analysis]], [[education]], [[sociology]], [[social anthropology]], and [[social change]].<ref name=Potter2006>{{cite journal|author=Potter, C.|year=2006|title=Psychology and the art of program evaluation|journal=South African Journal of Psychology|volume=36|issue=1|pages=82–102|doi=10.1177/008124630603600106|s2cid=145698028}}</ref>

===Discussion===
Strict adherence to a set of methodological assumptions may make the field of evaluation more acceptable to a mainstream audience, but this adherence works towards preventing evaluators from developing new strategies for dealing with the myriad problems that programs face.<ref name=Potter2006 />

It is claimed that only a minority of evaluation reports are used by the evaluand (client) (Datta, 2006).<ref name="HHM2009" /> One justification of this is that "when evaluation findings are challenged or utilization has failed, it was because stakeholders and clients found the inferences weak or the warrants unconvincing" (Fournier and Smith, 1993).<ref name="HHM2009" /> Some reasons for this situation may be the failure of the evaluator to establish a set of shared aims with the evaluand, or creating overly ambitious aims, as well as failing to compromise and incorporate the cultural differences of individuals and programs within the evaluation aims and process.<ref name="Reverberation" /> None of these problems is due to the lack of a definition of evaluation; rather, they arise from evaluators attempting to impose predisposed notions and definitions of evaluation on clients.

The central reason for the poor utilization of evaluations is arguably{{By whom|date=May 2011}} a lack of tailoring of evaluations to suit the needs of the client, owing to a predefined idea (or definition) of what an evaluation is rather than what the client's needs are (House, 1980).<ref name="HHM2009" /> The development of a standard methodology for evaluation will require arriving at applicable ways of asking and stating the results of questions about ethics (such as agent-principal relationships, privacy, stakeholder definition, and limited liability) and about could-the-money-be-spent-more-wisely issues.