
CURRICULUM CONSTRUCTION: Seminar I

R. Raj Kumar, Master of Education, Department of Education, Periyar University, Salem - 636 011, 2013-14.

Use of Evaluation Models

What is evaluation?
Evaluation describes how to assess the nature, impact, and value of an activity through the systematic collection, analysis, and interpretation of information, with a view to making an informed decision.
Evaluation involves three activities:
- Outlining clear purposes
- Gathering evidence
- Judgment
Evaluation is part of development rather than apart from it.

CURRICULUM EVALUATION
- Intra-curricular evaluation
- Teacher evaluation of students
- Student evaluation of teachers
- Materials evaluation
- Verification of methods
- Evaluation of tests and examinations
- Checking learning outcomes in the field
- Curriculum review/improvement/change/modification
- System revision

Curriculum Evaluation
Curriculum evaluation, broadly conceived, is a stock-taking process. A curriculum may be structured in many ways. For instance:
- it may be based on an assembly of courses deemed necessary to meet certain job requirements;
- it can be formed from the basics of a particular discipline in a faculty or department;
- it can be designed to meet the needs of a professional or technical programme;
- or it can be developed from a systematic specification of outcomes.
It must therefore be periodically evaluated.

APPROACHES TO CURRICULUM EVALUATION
- Goal-based: determining whether the pre-stated goals of educational or training programs were met.
- Goal-free: uncovering and documenting what outcomes occurred in educational or training programs, without regard to whether they were intended program goals.
- Responsive (contingent on unforeseen events): comparing what was intended for instruction with what was actually observed.

These approaches are based on the classical curriculum evaluation models presented by Stufflebeam and Shinkfield (1990):
- Decision-making: collecting information about educational or training programs for the purpose of decision-making.
- Accreditation: forming professional judgments about the processes used within education or training programs.

World Views about Evaluation
Melrose (1996) grouped existing models into three paradigms, or world views, about evaluation:
i. the functional model;
ii. the transactional model; and
iii. the critical paradigms of evaluation.
Why a model is needed for curriculum evaluation: to provide a conceptual framework for designing a particular evaluation, depending on its specific purpose.

Models of Curriculum Evaluation
Concept of a model:
- Theory: explains a process (Why?)
- Model: describes a process (How?)
A model is a representation of reality, presented with a degree of structure and order.

Classification of Models
- Working models: small scale, large scale
- Static models

Types of Models
Models describe:
- what is meant by a concept;
- how to perform a task;
- the relationship between the various elements of a situation or process.

Models of Curriculum Evaluation
- Alade's Six Models
- Olaitan's Four Models
- Tyler's Model
- CIPP Model
- Stake's Model
- Kaufman-Rogers Model
- Scriven's Model
- Stenhouse's Research Model
- Stufflebeam's CIPP Model
- Kirkpatrick's Model

1. ALADE'S SIX MODELS
From another perspective, Lawton (1980), cited in Alade (2006), classified models of curriculum evaluation into six:
1. The Classical Model
2. The Research and Development Model
3. The Illumination Model
4. The Briefing Decision-Makers Model
5. The Teacher as Researcher (Professional) Model
6. The Case Study Model

2. OLAITAN'S FOUR MODELS
For vocational-technical education evaluation, Olaitan (1996) identified the following evaluation models, which have been employed by a good number of researchers:
- the Illumination Model;
- the Goal-Free Model;
- the Context (C), Input (I), Process (P), Product (P) (CIPP) Model; and
- the Transactional Model.
These have been found reliable as guides for collecting evaluative data in curriculum evaluation.

3. TYLER'S MODEL (1949)
Key emphasis: instructional objectives
Purpose: to measure students' progress towards objectives
Method:
1. Specify instructional objectives
2. Collect performance data
3. Compare performance data with the specified objectives/standards
Limitations:
- Ignores process
- Not useful for diagnosing why a curriculum has failed
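The three steps of Tyler's method can be sketched as a simple comparison of collected performance data against pre-specified objective standards. This is only an illustrative sketch; the objective names and threshold values are assumptions, not from the source.

```python
# Sketch of Tyler's objective-based evaluation: compare collected
# performance data against pre-specified standards for each objective.
# Objective names and numeric standards below are illustrative assumptions.

def tyler_evaluate(objectives, performance):
    """Return, per objective, whether the observed score meets the standard."""
    report = {}
    for name, standard in objectives.items():
        score = performance.get(name)
        report[name] = score is not None and score >= standard
    return report

# Step 1: specify instructional objectives with target standards (percent).
objectives = {"reading comprehension": 70, "essay writing": 60}
# Step 2: collect performance data.
performance = {"reading comprehension": 82, "essay writing": 55}
# Step 3: compare performance data with the standards.
print(tyler_evaluate(objectives, performance))
# {'reading comprehension': True, 'essay writing': False}
```

Note how the sketch also exposes the model's stated limitation: it reports only whether each objective was met, not why a shortfall occurred.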

Tyler's Planning Model
1. Objectives: What educational goals should the school seek to attain?
2. Selecting learning experiences: How can learning experiences be selected which are likely to be useful in attaining these objectives?
3. Organizing learning experiences: How can learning experiences be organized for effective instruction?
4. Evaluation of students' performance: How can the effectiveness of learning experiences be evaluated?

4. CIPP MODEL (1971)
The CIPP model of evaluation concentrates on:
- Context of the programme
- Input into the programme
- Process within the programme
- Product of the programme

Context Evaluation
Objective: to define the context, identify the population, assess needs, and diagnose problems.
Method: comparing the actual and the intended inputs and outputs.
Relation to decision-making: deciding upon the settings to be served and the changes needed in planning; needs of industry and society; future technological developments; mobility of students.

Input Evaluation
Objective: to identify and assess system capabilities, alternative strategies, and implementation designs.
Method: analyzing resources, solution strategies, and procedural designs for relevance, feasibility, and economy.
Relation to decision-making: selecting sources, structuring activities, and providing a basis for judging implementation.

Process Evaluation
Objective: to identify defects in the procedural design or its implementation.
Method: monitoring, describing the process, interacting, observing.
Relation to decision-making: implementing and refining the programme design and procedure for effective process control.

Product Evaluation
Objective: to relate outcome information to objectives and to context, input, and process information.
Method: measuring outcomes against standards and interpreting them.
Relation to decision-making: deciding whether to continue, terminate, modify, or refocus a change activity.
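The four CIPP stages can be summarised as a small data structure pairing each stage's objective with the kind of decision it informs (the decision labels follow the planning/structuring/implementing/recycling scheme used later in this document). A minimal sketch:

```python
# Sketch of the CIPP model as a data structure: each evaluation stage
# pairs a condensed objective with the decision type it informs.
from dataclasses import dataclass

@dataclass
class CippStage:
    name: str
    objective: str
    decision: str

CIPP = [
    CippStage("Context", "Define the context, assess needs, diagnose problems",
              "Planning decisions"),
    CippStage("Input", "Assess system capabilities and alternative strategies",
              "Structuring decisions"),
    CippStage("Process", "Identify defects in the design or its implementation",
              "Implementing decisions"),
    CippStage("Product", "Relate outcomes to objectives and standards",
              "Recycling decisions"),
]

for stage in CIPP:
    print(f"{stage.name}: {stage.objective} -> {stage.decision}")
```

The ordering matters: each stage's findings feed the decisions that shape the next stage of the programme.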

Limitations of the CIPP Model
1. Overvalues efficiency
2. Undervalues students' aims
The CIPP approach recommends:
- Multiple observers and informants
- Mining existing information
- Multiple procedures for gathering data; cross-checking qualitative and quantitative findings
- Independent review by stakeholders and outside groups

5. STAKE'S MODEL (1969)
Antecedent: any condition existing prior to teaching and learning which may relate to outcomes.

Transactions: the countless encounters of students with teachers, students with students, authors with readers, parents with counselors.

Outcomes: measurements of the impact of instruction on learners and others.

ANTECEDENTS: conditions existing prior to curriculum evaluation
- Students' interests or prior learning
- Learning environment in the institution
- Traditions and values of the institution
TRANSACTIONS: interactions that occur between
- teachers and students;
- students and students;
- students and curricular materials;
- students and the educational environment.
Transactions = the process of education.

OUTCOMES
- Learning outcomes
- Impact of curriculum implementation on students, teachers, administrators, and the community
- Immediate outcomes vs. long-range outcomes
Three sets of data:
1. Antecedents: conditions existing before implementation
2. Transactions: activities occurring during implementation
3. Outcomes: results after implementation
Describe the program fully; judge the outcomes against external standards.

STAKE'S MODEL
Key emphasis: description and judgment of data
Purpose: to report the ways different people see the curriculum
Focus is on responsive evaluation:
1. Responds to audience needs for information
2. Orients more toward program activities than results
3. Presents all audience viewpoints (multi-perspective)
Limitations:
1. Stirs up value conflicts
2. Ignores causes
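Stake's three data sets lend themselves to a simple comparison of what was intended with what was observed, echoing the responsive approach described earlier (comparing intended instruction to observed instruction). A sketch, with all the example entries being illustrative assumptions:

```python
# Sketch of Stake's three data sets: for each of antecedents, transactions,
# and outcomes, record what was intended and what was observed, then check
# whether the two agree. All example entries are illustrative assumptions.

intended = {
    "antecedents": "students with basic algebra skills",
    "transactions": "teacher-led problem-solving sessions",
    "outcomes": "80% pass rate on the end-of-term test",
}
observed = {
    "antecedents": "students with basic algebra skills",
    "transactions": "mostly lecture, little problem solving",
    "outcomes": "62% pass rate on the end-of-term test",
}

# Agreement check: does the observed record match the intent in each data set?
agreement = {k: intended[k] == observed[k] for k in intended}
print(agreement)
# {'antecedents': True, 'transactions': False, 'outcomes': False}
```

A mismatch in transactions alongside a mismatch in outcomes, as here, is exactly the kind of descriptive evidence the model asks the evaluator to report before judging against external standards.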

6. KAUFMAN-ROGERS MODEL
Needs assessment: Where are we now? Where are we to be?
Discrepancy: the gap between current status and desired status. Discrepancies should be identified in terms of products of actual behaviors (ends), not in terms of processes (means).
Deduction: drawing a particular truth from a general, antecedently known rule (rule to examples).
Induction: rising from particular truths to a generalization (examples to rules).

7. SCRIVEN'S GOAL-FREE EVALUATION (1973)
Proponent: Michael Scriven
- Introduced the terms formative and summative
- Broadened the perspective of evaluation
- The evaluator should not know the educational program's goals, so as not to be influenced by them
- The evaluator is therefore totally independent
- The evaluator is free to look at processes and procedures, outcomes, and unanticipated effects

Roles of curriculum evaluation: Scriven differentiates between two major roles of curriculum evaluation, the formative and the summative.
- Formative evaluation: during the development of the programme
- Summative evaluation: at its conclusion
Methodology: the field is open to the hunter, but he did have a lethal checklist of criteria for judging any aspect of the curriculum.

8. STENHOUSE'S RESEARCH MODEL (1970s)
- Evaluation as part of curriculum development
- A continuous cycle of formative evaluation and curriculum improvement at school level
- The relationship between curriculum developer and evaluator is central
- The curriculum developer offers solutions
- The evaluator is the practical man who tempers enthusiasm with judgment
- The developer is the investigator: the teacher
- Autonomous professional self-development through self-study, study of others, and testing of ideas

9. STUFFLEBEAM'S CIPP MODEL
The CIPP model of curriculum development is a process for developing the curriculum; the CIPP model of curriculum evaluation is the process of checking the effectiveness of the developed and implemented curriculum.

Context (planning decisions):
- What needs are to be addressed?
- Defining objectives for the program
Input (structuring decisions):
- What resources are available?
- What alternative strategies should be considered?
- What plan has the best potential?

Process (implementing decisions):
- How well is the plan being implemented?
- What are the barriers?
- What revisions are needed?
Product (recycling decisions):
- What results are obtained?
- Were needs reduced?
- What should be done with the program?

10. Kirkpatrick's Four Levels of Evaluation
In Kirkpatrick's four-level model, each successive evaluation level builds on information provided by the level below.

ASSESSING TRAINING EFFECTIVENESS often entails using the four-level model developed by Donald Kirkpatrick (1994).

Level 1 - Reaction: Evaluation at this level measures how participants in a training program react to it. This type of evaluation is often called a "smile sheet."

Level 2 - Learning: Assessment at this level moves the evaluation beyond learner satisfaction and attempts to gauge the extent to which students have advanced in skills, knowledge, or attitudes.

Level 3 - Transfer: This level measures the transfer that has occurred in learners' behavior due to the training program. Are the newly acquired skills, knowledge, or attitudes being used in the learner's everyday environment?

Level 4 - Results: This level measures the success of the program in terms that managers and executives can understand: increased production, improved quality, decreased costs, reduced frequency of accidents, increased sales, and even higher profits or return on investment.
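Because each level builds on the one below, the four levels form a simple ordered hierarchy. A minimal sketch (the one-line question phrasings are condensed paraphrases of the level descriptions above):

```python
# Sketch of Kirkpatrick's four-level model as an ordered hierarchy:
# each level builds on the information gathered at the levels below it.
KIRKPATRICK_LEVELS = [
    (1, "Reaction", "How did participants react to the training?"),
    (2, "Learning", "What skills, knowledge, or attitudes were gained?"),
    (3, "Transfer", "Is the new behavior used in the everyday environment?"),
    (4, "Results", "What organizational outcomes (production, quality, costs) followed?"),
]

def levels_drawn_on(n):
    """An evaluation at level n draws on all levels up to and including n."""
    return [name for level, name, _ in KIRKPATRICK_LEVELS if level <= n]

print(levels_drawn_on(3))
# ['Reaction', 'Learning', 'Transfer']
```

The hierarchy explains why a Level 4 evaluation is the most expensive: to interpret organizational results it must also account for reaction, learning, and transfer data.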

Thank You