
Evaluation of the pilot course in Wales

Pontydysgu, September 2017

Section 1: The Evaluation Framework

1. Evaluation rationale and theoretical perspective

Evaluation as a formal activity that we would recognise has existed for a surprisingly long time: there is evidence of formal evaluation processes in the administration of China in the 7th century. However, evaluation has only become a recognised area of academic study since about the 1960s. It is probably true to say that evaluation started as a field of practice and that the theory was derived from it. As it evolved, ideological disputes developed alongside disagreements over definitions, terminology, ethics and so on. FitzPatrick, Sanders and Worthen (2004) identified nearly 60 different models in the 30 years between 1960 and 1990 alone. This proliferation of models was bewildering for the practitioner, especially as many of these models and the tools they generated had no obvious theoretical perspective. Why is this a problem? Why should practitioners need a theoretical framework? Simply, a ‘good’ theory will set out the assumptions it is making and on which its logic is predicated. Different theories make different assumptions and generate models based on different preconceptions and definitions of evaluation, which in turn lead to very different practices.

Deriving a taxonomy of evaluation approaches

Many researchers have tried to make sense of this huge diversity of models and theories and to find some way of classifying them. However, there is no general agreement on this, so there is now the additional problem of trying to classify the classification systems. What follows is only one framework for distinguishing between different theories of evaluation. It was devised by FitzPatrick, Sanders and Worthen (2004) and its usefulness lies in its comprehensiveness and usability. Other classification systems of note include Shadish, Cook and Leviton’s framework1. However, whilst this is academically robust, it is more difficult to adapt to practice.

Philosophical / ideological differences

Approaches to evaluation may differ fundamentally because their underpinning philosophy or ideological base is different. That is, different evaluation theories will be based on different assumptions about the way the world works, and so the models and practices based on those

1 Shadish, William R., Cook, Thomas D. and Leviton, Laura C. (1991) ‘Foundations of Program Evaluation: Theories of Practice’, Sage Publications, Newbury Park, Calif.; Cook, Thomas J. (April 1992), Journal of Public Administration Research and Theory, Volume 2, Issue 2, pp. 212–214, https://doi.org/10.1093/oxfordjournals.jpart.a037124


theories will be different as well. By and large, they can be located on a continuum from objectivist to subjectivist. Objectivism in evaluation is used in an equivalent sense to the empirical tradition in scientific research (positivism): it focuses on data collection and analysis techniques that produce results that are reproducible and verifiable by other evaluators, and it generates conclusions that are evidence based and can be ‘scientifically’ justified.2 The evaluation is thus ‘external’ to the evaluator, who is simply someone technically competent and proficient in the application of procedures. Subjectivism is based on “...an appeal to experience rather than to scientific method. Knowledge [of the evaluator] is conceived as being largely tacit rather than scientific.” (House 1980, in FitzPatrick, Sanders and Worthen 2004). The validity of a subjectivist evaluation depends on the relevance of the evaluators’ background, their experience and expertise, the keenness of their perceptions and their insightfulness in generating interpretations and conclusions. Thus, the evaluation procedures are ‘internal’ to each evaluator and are not explicitly understood or reproducible by others.

Until 20 years ago, objectivism in evaluation was a goal to be aspired to. However, the same criticisms levelled at the usefulness of positivism in the social sciences in general were also applied to objectivism in evaluation. Campbell (1984) summed it up: “Twenty years ago positivism dominated the philosophy of science...Today the tide has completely turned among the theorists of science in philosophy, sociology, and elsewhere. Logical positivism is almost universally rejected.” This point of view has been upheld by many writers on evaluation and, even if it is not universally subscribed to, probably represents a general trend. The major argument is that, unlike traditional scientific research, evaluation has to deal with complex phenomena in real world settings, take into account a multiplicity of stakeholders and unstable and unpredictable systems, and requires a high level of human interactivity. The other criticism is that objectivism depends for its validity on its ‘scientific’ methodology and is only credible if you happen to value that methodology. We would argue that objectivism conceals hidden values and biases of which many evaluators are unaware – even the choice of data collection techniques and instruments is not value-neutral, but this is either not recognised or ignored by many so-called objective evaluations.

2 Clearly there are semantic and epistemological differences between objectivism, positivism and empiricism (Peikoff, Leonard, Objectivism: The Philosophy of Ayn Rand, Penguin, 1993), but in much evaluation theory the terms are often used interchangeably.


Despite the reaction of the theorists, however, the message does not seem to have filtered through to the client base, and the majority of evaluation consumers, particularly in education (and the public sector in general), are still wedded to the idea of objective evaluation and ‘finding the facts’. The major criticism is that subjectivist evaluation often leads to contradictory conclusions that cannot be reconciled, because the process that led to the conclusions is idiosyncratic to each evaluator and so cannot be replicated.

Differences based on defining value or worth

We can also distinguish between different theoretical approaches depending on how they define value and make judgements, rather than on their philosophical differences. This continuum extends from ‘utilitarian’ to ‘intuitionist-pluralist’. ‘Utilitarianism’ is a philosophy based on maximising benefits to society. Utilitarian approaches to evaluation are based on the premise that the best programmes are those that achieve the greatest good for the greatest number. The evaluator will try to assess overall impact in terms of total group gains, using average outcome scores against the criteria selected to determine worth. Again, governments and the public sector tend to be adherents of this type of evaluation as it lends itself to large-scale comparisons of programmes and mass aggregation of data. Managers and public programme administrators tend to be the main audiences. According to FitzPatrick et al, the intuitionist-pluralist approach is at the other end of the spectrum and is based on the premise that value depends on the impact of a programme on each individual and that the ‘greatest good’ is that which maximises the benefits for all stakeholders. “This evaluation focus will be on the distribution of gains (for example by cultural or subcultural demographic groups such as ethnicity or gender or age) or distribution of benefit across stakeholders (e.g. learners, administrators, delivery agencies, funding bodies, the public). There can be no common index of “good” but rather a plurality of criteria and judges. The evaluator is no longer an impartial ‘averager’ but a portrayer of different values and needs. The merit or worth of any programme depends on the values and perspectives of whoever is judging it and all stakeholders are legitimate judges.”

Methodological differences

Although there is a strong correlation between an evaluator’s ideological approach and the methodology and techniques they will use (because, of necessity, one drives the other), there are other major divides based on methodological differences that are not necessarily rooted in a particular philosophical approach. For example, many evaluators (both theoreticians and practitioners) and also many clients tend to view qualitative and quantitative approaches as different paradigms. We do not subscribe to this view, believing that this is not a fundamental divide but simply a way of describing evaluation approaches by the types of data that are used.3


Nevertheless, we recognise this as an important distinction for others and one that impacts on the overall evaluation methodology and the tools used.

Differences according to discipline or field of application

Evaluation is a relatively young field and still draws heavily on methodologies adapted from anthropology, sociology, psychology, philosophy, economics and mathematics. One consequence is that evaluation approaches can be grouped around their parent discipline, so we tend to find ‘mathematical approaches’ or ‘sociological approaches’. More recently the search for new models has widened its net, and evaluation theorists such as Smith (1981) are trawling newer disciplines such as investigative journalism, photography, storytelling, philosophical analysis, forensic pathology and literary criticism for new ideas. Evaluation theory has also developed in a social context, and practitioners work in different cultures and different sectors, with different target groups and different audiences. Consequently, different approaches and models have tended to emerge based on these factors. For example, ‘education programme’ evaluation has developed along a different trajectory from evaluation in, say, the health services. Whilst many writers would argue that this is not a true theoretical divide, ‘theory-in-practice’ is a powerful determinant of evaluation approach and also of stakeholders’ perceptions and expectations of the evaluation process.

Differences in practice

The above distinctions are all based (loosely) on theoretical divisions. However, FitzPatrick et al also point out that differences in approach can be practice-driven. Firstly, evaluators disagree about whether they should simply provide information so that decision makers can make the value judgements; others would say that the evaluator’s report to decision makers is incomplete if it does not contain a value judgement. Secondly, evaluators differ in their perception of their own role and their place in the evaluation process. Who has authority and responsibility for the evaluation, and to whom should the evaluator be accountable and answerable? If one evaluator sees their role as a ‘critical friend’ and another as ‘inspector’ or ‘judge’, then this will obviously influence the way they conduct an evaluation and also the conclusions they draw.

Thirdly, evaluators will be limited by their prior experience both in evaluation and also by their own discipline or professional background. Evaluation skills and knowledge are cumulative. Previous exposure to frequently recurring problems will affect the way an evaluator works. On the one hand it will probably mean the evaluator is quicker to detect problems, to identify issues of concern and make more insightful judgements. On the other hand, it will also mean that the evaluator’s perceptions in a new situation are unlikely to be ‘neutral’. Fourthly, evaluators have different views about what skills and expertise they should possess. Evaluators are frequently chosen on the basis of their expertise or practitioner base in the field

3 Numerical data is frequently equated with ‘quantitative’ methods. However, this is not necessarily the case.


being evaluated rather than on the basis of their skills and experience as an evaluator. This is gradually changing: as evaluation becomes increasingly professionalised and recognised as a specialist area in its own right, so professional evaluators are becoming specialised within the area. Some evaluators would argue that specialist knowledge of the field being evaluated is a prerequisite for the credibility of the whole process of evaluation. Others claim that not only is this unnecessary but that it can, on occasion, be unhelpful.

A classification system

The above analysis is interesting and outlines the major theoretical divides in evaluation. However, it is less helpful in terms of systematically examining the variation between particular evaluation approaches, because although those approaches could be positioned on each of the above ‘dimensions’, their location would vary from one dimension to another. The next section tries to provide some answers. Many evaluation theorists have attempted to do this, and a workable solution has been put forward by FitzPatrick, Sanders and Worthen (2004). We are proposing to use their work – with some modifications – partly in the interests of consistency (having referenced them heavily so far) and partly because they set out very clearly the thinking and rationale underpinning their classification system. Taxonomically, it is a less than satisfactory categorisation, as the approaches do not necessarily differ from one another along the same dimension. However, it is pragmatic, as the categories conveniently represent the major clusters of models and approaches in use today.

A Classification Schema for Evaluation Approaches

FitzPatrick et al identify 5 major clusters of evaluation approaches:
● Objectives oriented approaches
● Management oriented approaches
● Consumer oriented approaches
● Expertise oriented approaches
● Participant oriented approaches

However, to this we propose to add Van der Knaap’s ‘learning oriented approach’.

Objectivist / Rationalist  <——————>  Subjectivist

Utilitarian  <——————>  Intuitionist-pluralist

The six clusters of approaches, each with an example, sit along these continua:

● Objectives oriented approaches – e.g. Provus’s Discrepancy Model
● Management oriented approaches – e.g. Stufflebeam’s CIPP model
● Consumer oriented approaches – e.g. Scriven’s ‘Key Evaluation Checklist’
● Expertise oriented approaches – e.g. many professional and accreditation bodies
● Participant oriented approaches – e.g. Stake’s Countenance Framework
● Learning oriented approaches – e.g. Peter van der Knaap

These 6 categories fall more or less along a continuum from utilitarian to intuitionist-pluralist, so there is some logical basis to the scheme in addition to its convenience and accessibility.

Objectives oriented evaluation approaches

Objectives-oriented evaluation is based on the idea that the purposes, goals or targets of a project are determined at the start and the evaluation process should establish whether these have actually been achieved – and, if not, why not. It is very similar to another approach known as a ‘systems approach’ to evaluation, and both are very popular with public sector agencies concerned with justifying expenditure and performance measurement. It is sometimes called ‘goal-driven’ evaluation, in contrast with other approaches, which are called ‘goal-free’. There are many examples of objectives oriented models; the earliest is probably Tyler’s and, more recently, Provus’s Discrepancy Model. The disadvantages are that this sort of approach can miss important outcomes if they were not included in the original objectives, nor does it challenge the value of the objectives themselves.

Management oriented evaluation approaches

The management-orientated approach to evaluation is meant to serve decision makers. Its rationale is that evaluation information is an essential part of good decision making and that the evaluator can be most effective by focussing the evaluation products on the needs of managers, policymakers, administrators and practitioners. Developers of this approach have traditionally relied on a systems approach to evaluation in which decisions are made about inputs, processes and outputs based on logic models and cybernetic theory. However, more recent developments have highlighted different levels of decision and decision makers and have focussed on who will use the evaluation results, how they will use them and what aspect(s) of the system they are making decisions about. Not surprisingly, it is the model preferred by many managers and management committees, but the downside is that the needs of other stakeholders are ignored. Stufflebeam’s CIPP model is one of the most popular in management-orientated evaluation.

Consumer orientated approaches


Consumer orientated approaches to evaluation adopt the perspective of the end-user of whatever service or product is being provided. For this reason they tend to be summative rather than formative and are concerned primarily with product evaluation. Consumer-orientated evaluation relies heavily on criterion-referenced evaluation techniques such as benchmarking or kite marking and is understandably popular with standards agencies and ‘watchdog’ organisations. Michael Scriven's ‘Key Evaluation Checklist’ is probably the best-known example. The major disadvantage of a consumer-orientated approach is that it is a ‘backward-mapping’ approach and does not help make predictions about future impacts. It also tends to play down the nature of human interaction with the products being evaluated.

Expertise orientated approaches

Expertise oriented evaluation is based on the notion of “connoisseurship” and criticism and relies on the subjective professional judgement and expert knowledge of the evaluator. This is the oldest form of evaluation and is still very popular despite its limitations. Expertise-oriented evaluation may be formal or informal, based on individual expertise or, more usually, on the collective expertise of a panel. The opinions of multiple experts are popularly believed to minimise bias, though this does not always follow. It relies far less on external tools and instruments than other forms of evaluation and more on the experience and wisdom of the evaluator. Many public systems are based on expertise oriented evaluation – for example the jury system, the school inspection system, licensing agencies, review boards, the refereeing system for academic journals, national commissions and enquiries and so on. Many organisations expect this type of evaluation if they employ an external evaluator, and the notion of evaluation by ‘peer review’ is still the dominant model in most professional associations. The disadvantages are obviously the high reliance on the assumed expertise of the evaluator and a lack of explicit and published standards. Also, the credibility of the results is affected by the status of the evaluator, but equally the credibility of the evaluator is often affected by the results.

Learning-oriented evaluation approaches

This is a relatively new group of approaches and not one that was included in FitzPatrick et al’s classification. Nevertheless, we have included it because it is an approach that we personally use more than any other. The operating principle is that the purpose of evaluation is to contribute to some form of collective or organisational learning. Different models within this overall approach are based on different theories and types of learning, including ‘corrective’ or behavioural learning, cognitive learning and social learning. The outputs and processes of the evaluation form the inputs of the learning. Learning-oriented evaluation approaches are still not widespread but are beginning to


gather momentum in the social agency sector, in education establishments and in voluntary organisations. The pioneer of work in this field was Peter van der Knaap. More recently we have extended the approach to include evaluation as a contributor to knowledge creation in an organisation. Van der Knaap identifies 3 models of evaluation. The following summary, adapted from van der Knaap (Hughes and Attwell 2002), illustrates how the learning processes may become dysfunctional.

System learning
a) no feedback information
b) inferior feedback information
c) no correction on the basis of feedback
d) feedback in wrong direction (tunnel vision)

Cognitive learning
a) no perception of information stimuli
b) no (adequate) interpretation of stimuli
c) no (experience of) cognitive dissonance
d) no (or insufficient) reflection

Social learning
a) no communication
b) no or incomplete comprehension
c) no connection amongst communication
d) no confrontation or contest of innovative viewpoints

Knowledge management
a) incomplete data sources or lack of recognition of availability of range of data sources
b) failure to combine knowledge or inappropriate synthesis or lack of space to share knowledge
c) failure to recognise relevance of alternative knowledge in novel contexts or inappropriate analysis
d) inappropriate or inadequate knowledge storage and retrieval systems

If, as we suggested earlier, evaluation is inextricably bound up with facilitating the learning process, then a key task of any evaluative process is to minimise the problems which cause the dysfunctional learning situations described above. Each of these learning style models can be mapped to a corresponding evaluative focus.

Show Your Own Gold: Pontydysgu Evaluation Report

8

Page 9: Web viewEvaluation of the pilot course in Wales. Pontydysgu. September 2017. Section 1: The Evaluation Framework. Evaluation rationale and theoretical perspective

Evaluation focus and corresponding learning style:

● Giving feedback and information – predominantly corrective
● Providing stimuli or stimulus configuration – predominantly cognitive
● Being a faction within a communicative domain – predominantly social
● Handling and information management – predominantly knowledge based

Each evaluative focus is associated with different definitions of evaluation, different working styles, different methodologies and different normative frames. These are mapped on the grid below.

Corrective learning style
● Evaluation focus: Giving feedback and information
● Definition: Evaluation as error detection
● Evaluation style: Analytic
● Evaluation products: Providing relevant intelligence
● Methodology: Unbiased instruments
● Orientation: Outcome oriented
● Criteria: Neutral criteria
● Normative frame: Evaluator does not appraise policy goals
● Key question: How successful was the implementation of policy?
● Main tasks: Identifying discrepancy between objectives and outcomes
● Objective versus subjective focus: Objectivity central
● Centrality of evaluator role: Data focused – low profile role for evaluator

Cognitive learning style
● Evaluation focus: Providing stimuli or external stimulus configurations
● Definition: Evaluation as creating cognitive dissonance and promoting reflection
● Evaluation style: Interpretative
● Evaluation products: Providing critique and commentary
● Methodology: Combines methodological robustness with informed judgement
● Orientation: Process orientated
● Criteria: Implicit value judgements made explicit
● Normative frame: Evaluator can challenge policy norms and assumptions
● Key question: What have we learned as a result?
● Main tasks: Supporting, challenging, confirming or questioning validity claims in context of wider reality
● Objective versus subjective focus: Allows for subjectivity
● Centrality of evaluator role: Evaluator focused – high profile role

Social learning style
● Evaluation focus: Influencing the social arena and argumentative domain
● Definition: Evaluation as specialist area of dialectic communication
● Evaluation style: Facilitative
● Evaluation products: Providing another source of information
● Methodology: Relies on communication skills of evaluator
● Orientation: Relationship orientated
● Criteria: Both evaluators and policy makers may change positions and criteria
● Normative frame: Evaluator and client have mutually challenging and persuading positions
● Key question: Has the quality of the policy discourse improved?
● Main tasks: Questioning the functionality of the communications arena
● Objective versus subjective focus: Evaluator wants to influence outcomes and is an active participant in the communication game
● Centrality of evaluator role: Collective venture – role of evaluator reduced but radius of action increases

Knowledge based learning style
● Evaluation focus: Creating space for knowledge exchange
● Definition: Evaluation as aid to knowledge management
● Evaluation style: Synthetic
● Evaluation products: Providing ways of handling information
● Methodology: Instruments based on post normal science
● Orientation: Community orientated
● Criteria: Criteria are constantly shifting and recognised as shared value judgements
● Normative frame: Evaluator can challenge policy norms and assumptions
● Key question: What new knowledge has been created?
● Main tasks: Challenging existing knowledge routes, encouraging new combinations of creations of knowledge
● Objective versus subjective focus: Intersubjectivity and shared meanings central
● Centrality of evaluator role: Low profile role

The main limitation of this approach is that it does not lend itself to “mass surveys”, as it relies heavily on personal interaction between the evaluator and the project team and on the evaluator’s understanding of the learning needs of the organisation. Also, within this overall approach there are very disparate models, some requiring a high level of commitment to the process, which may be lacking.

Participant-oriented evaluation approaches

This is an increasingly popular approach that differs fundamentally from all the others in that it takes the needs of project participants as its starting point. This is not to say that the other approaches ignore participant needs, but for the most part benefits for participants represent the endpoint of the evaluation and not the beginning. Participants are not simply the direct beneficiary target group of a project but will also include other stakeholders and potential beneficiaries. Thus an educational project for women returners


would include the learners themselves, the project staff, the management team and the funders, but may also include the wider community, the learners’ families, the schools attended by the learners’ children, childcare agencies and so on. Participant-oriented evaluation does not usually follow a formal plan drawn up in advance; rather it looks for patterns in the data as the evaluation progresses. Data is gathered in a variety of ways using a range of techniques and culled from many different sources. Understandings grow from observation and bottom-up investigation rather than rational deductive processes. The evaluator’s role is to represent multiple realities and values rather than singular perspectives. Participant-oriented evaluation includes many sub-groups that share all or some of the above characteristics, including Responsive Evaluation, Naturalistic Evaluation, Utilization Focussed evaluation and Empowerment evaluation. Of all the models, probably the best known and one of the most useful is Stake's Countenance Framework. Criticisms of this approach are many; bureaucrats tend to hate it because of its lack of ‘objectivity’ and because the outputs of the evaluation are unpredictable. It is difficult to cost and control. Without a very experienced evaluator to facilitate the process, it can degenerate from an ‘organic’ approach into one which is chaotic and lacking in focus. Also, there may be concentration on process at the expense of outputs.

2. Evaluation Design for the Show Your Own Gold project

There were many criteria that were taken into account when selecting an evaluation model. For example, what was practical within the time scale of the project, what was the capability and capacity of the staff undertaking the evaluation, how ‘intrusive’ would the evaluation process be and so on.

Given the comparatively small size of the sample and the diversity of individuals participating, many of the more empirical approaches were impractical, as they were likely to yield insufficient data from which to draw valid or reliable conclusions that were more generally applicable.

A learning-oriented evaluation approach was reluctantly rejected because, although it represented the best fit with the project aims and processes, the project staff did not have the in-house capability to undertake it. Clearly there would have been benefits had external expertise been available. However, there were additional reasons. The outputs of learning-oriented evaluation primarily benefit organisational learning rather than individual learning, and the involvement of stakeholders from a range of organisational groups is important; after preliminary enquiries it seemed that this was unlikely to be practical.

The final choice was determined by consideration of the rationale underpinning the type of project and its funding. Show Your Own Gold was a pilot project which set out to develop and trial a new approach to working with disadvantaged young people which would help them build


a digital narrative of their skills and experience, which would contribute to building their confidence and increasing their employability. The key element is that it was a pilot project, with the emphasis on exploitation and dissemination in order that other organisations could build on the results and implement a similar programme. Thus, the audience for the project outputs, including the evaluation, would be managers and policy makers. For this reason it was felt that a management-oriented approach to evaluation would be the most useful, providing feedback that could usefully inform management decisions. Hence Stufflebeam’s CIPP model was adopted as the working model for the evaluation.

The CIPP model is unique as an evaluation guide in that it allows evaluators to evaluate a programme at different stages: before the programme commences, by helping evaluators to assess the need for the project, and at the end of the project, by assessing whether or not it had an effect.

2.1 CIPP Model of Evaluation

This evaluation uses Stufflebeam’s CIPP model which evaluates Context, Input, Process and Product to judge a project’s value. CIPP is a decision-focused approach to evaluation and emphasises the systematic provision of information for programme management and operation.

The CIPP framework was developed as a means of linking evaluation with programme decision-making. It aims to provide an analytic and rational basis for programme decision-making, based on a cycle of planning, structuring, implementing and reviewing and revising decisions, each examined through a different aspect of evaluation – context, input, process and product evaluation.

The CIPP model is an attempt to make evaluation directly relevant to the needs of decision-makers during the phases and activities of a programme. It is a framework to systematically guide the conception, design, implementation and assessment of service-learning projects, and to provide feedback and judgement of a project’s effectiveness for continuous improvement.

The four stages of the evaluation process are summarised below, and the evaluation of the programme is reported under these main headings.
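A compact way of summarising the four stages and the guiding question the report attaches to each is sketched below; the stage names and questions are taken from sections 3.1 to 3.4 of this report, while the data structure itself is purely illustrative.

```python
# Illustrative sketch: the four CIPP stages and the guiding questions used in
# this report (sections 3.1-3.4). A summary aid only, not part of the CIPP model itself.
CIPP_STAGES = {
    "Context": {
        "question": "What should we do?",
        "focus": "Needs assessment data to determine goals, priorities and objectives",
    },
    "Input": {
        "question": "How should we do it?",
        "focus": "Reviewing the steps and resources needed to meet the goals",
    },
    "Process": {
        "question": "Are we doing it as planned?",
        "focus": "Monitoring implementation, conflicts, support, materials and budget",
    },
    "Product": {
        "question": "Did the programme work?",
        "focus": "Comparing actual outcomes with anticipated outcomes",
    },
}

if __name__ == "__main__":
    for stage, detail in CIPP_STAGES.items():
        print(f"{stage} evaluation: {detail['question']} ({detail['focus']})")
```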


3.0 Evaluation findings4

3.1 Context evaluation: Goals

This stage of the evaluation answers the question ‘What should we do?’ It involves collecting and analysing needs assessment data to determine goals, priorities and objectives.

Background

The project in Wales was undertaken in the former mining town of Pontypridd, which, together with 5 smaller communities, has a total population of 30,000. Twenty per cent of the population is Welsh speaking.

It is located in the municipal authority of Rhondda Cynon Taf. Pontypridd is at the heart of the Objective 1 area ‘West Wales and the Valleys’ (UKL1). This is the poorest NUTS 2 region in Europe and has one of the highest indices of multiple deprivation (IMD). GDP per head in West Wales and the Valleys in 2015 was 70 per cent of the EU27 average and was the lowest of the 37 NUTS 2 areas of the UK.

Needs

The NUTS 2 region in which the project operated is one of the top 5 most deprived areas in the UK using the Laeken index, and almost a quarter of young people are living in households below the poverty line, that is, where the household income is less than 60% of the national median income (Child Poverty Strategy: Children’s Commissioner). The area is characterised by high youth unemployment, low levels of educational attainment and low household income.

4 Stufflebeam’s original terminology has been used but, where this does not translate easily, a more familiar term has been added in brackets.

Young people in this environment typically lack skills, qualifications and opportunities. As a result, many suffer from low self-esteem and low levels of confidence.

Beneficiaries

There were 18 beneficiaries at the introductory evening but this dropped to 13. They were all students at Coleg y Cymoedd, a large, four-campus further education college providing post-compulsory and vocational training across four former mining valleys in South Wales. Most of them had no clear career plans or goals. The group was almost equally divided between male and female, with ages ranging from 17 to 20.

Resources

The only institution involved in delivering the training was Pontydysgu Ltd., the project partner, which has considerable experience of planning, organising and delivering training in a broad range of IT-related areas, particularly in technology-enhanced learning. Two trainers were allocated to the course, which was sufficient. All materials, resources and equipment were provided by Pontydysgu, and all participants had the use of a laptop and a tablet or smartphone.

Problems

Recruitment was a problem because the target group were difficult to reach. Many of them were already disillusioned by education in general and, in particular, by what they perceived as ‘careers education’. There is a plethora of government initiatives for young people not in employment, education or training, many of which are compulsory for those young people in receipt of welfare benefits. Many of these involve attending courses designed to improve employability. However, many young people see them simply as hoops to jump through, and there is a high level of cynicism around their usefulness as all too frequently they do not lead to employment. Set against this culture, it is difficult to engage young people by offering the opportunity to attend yet another programme.

Interestingly, more participants were attracted by the opportunity to develop technology skills than the self-awareness or confidence building aspects, which were commonly occurring elements in other courses they had attended.

Another problem, linked to recruitment, is that other training providers who are paid to deliver programmes for unemployed young people are not cooperative as they perceive this course as competition.

One operational difficulty is that filming and photographing on educational premises where images of third parties are included is virtually impossible. This is part of the staff and student safeguarding procedures, and it is not sufficient to have authorisation from the individuals concerned. The situation is also subject to protocols established by trades unions and professional associations.


An additional - and significant - problem was that the training course being evaluated was not actually part of the project funding.

Environment

Accommodation was provided at no cost. However, the accommodation was not ideal and it was difficult to create a warm and cosy atmosphere in a room which was over-large and offered limited opportunities to move furniture. Several other possible locations were considered and rejected on the grounds of accessibility, cost and unfamiliarity – basically the participants were happier in a place they knew than in a strange environment. On the positive side, there was plenty of space for practical work and for pinning up and displaying work. It also made it easy for people to spread out and work on their own or in small groups. If the project were repeated, there would be a good case for choosing a more ‘luxurious’ venue such as a hotel. Whilst this might initially be daunting for participants, it is nevertheless a way of showing that they are valued.

3.2 Input evaluation: Plans

This stage of the evaluation answers the question ‘How should we do it?’

“It involves reviewing the steps and resources needed to meet the goals and objectives and includes identifying successful external programs and materials as well as gathering information.” (Stufflebeam)

Stakeholders

The key stakeholders were identified as:

● The young people who participated
● The trainers
● The project partners / managers
● Others, e.g. the other teachers / youth workers etc. that work with the young people

Stakeholder Analysis

Young people
● Potential impact: Increased confidence, new skills, improved employability
● Influence on the project: High
● What is important to them? Getting a job, enjoying themselves, feeling they have learned something useful
● How could they contribute? Engagement and participation, suspending their disbelief
● How could they block the project? Cynicism, lack of commitment, writing off the course as being the same as all the others without giving it a chance
● Strategy for engagement: Effective marketing, developing the group as well as the individual

Trainers
● Potential impact: New skills and knowledge, professional development
● Influence on the project: High
● What is important to them? Making it work!
● How could they contribute? Bringing very disparate skill sets and experiences – technical and pedagogic
● How could they block the project? Insufficient empathy, not understanding the situation of the young people
● Strategy for engagement: Developing commitment and positivity

Project partners
● Potential impact: Additional ideas
● Influence on the project: Medium
● What is important to them? Piloting and disseminating their work; successful exploitation
● How could they contribute? Sharing their ideas
● How could they block the project? Lack of shared vision; misunderstandings because of language difficulties
● Strategy for engagement: Regular communication online and F2F

Other teachers / youth workers
● Potential impact: New ways of working
● Influence on the project: Low
● What is important to them? Finding new models to engage the target group
● How could they contribute? In-depth knowledge of the target group, knowledge of previous learning histories
● How could they block the project? Cynicism – a ‘seen it all before’ attitude
● Strategy for engagement: Involvement in planning and recruitment, feedback through evaluation report

Strategies

There was an introductory evening followed by a two day workshop. The first day focussed on digital identity and what sorts of information to present online when building a digital narrative as well as the types of story-telling tools available. During the second day the participants were invited to explore the demo platform before using the tools introduced in the previous session to begin to create their personal stories.

Aims

To improve the vocational preparation of young people by supporting self-development, authorship, collaboration, communication, multimodality and digital presence.

Objectives

By the end of the intervention, participants will be able to:

explain the concept of digital vocational biographies / narratives (??), say how creating one could be useful to them and list some of the ways they could use them.

explore their own learning history and identify those events, activities, skills and learning processes that they want to illustrate and capture

identify ways in which they can record these experiences and list some of the media they could use.

use the platform and discuss its functionality, potential and usefulness and customise it for their own use

describe the key features of writing and telling a story and construct their own narrative


take photographs using a digital camera, manipulate an image, store and share pictures

use audio equipment and software to create a recording, edit a recording and publish it on the platform and elsewhere

use video equipment and software to create, edit and publish a video on the platform and elsewhere.

present content to camera and mic in a way which enhances the content

write text appropriate for a web site including adding hyperlinks

identify, explore and use other social media such as Facebook, Twitter, Pinterest, Linked In etc. and say how they could be useful (or not) to them.

explore a variety of other graphic techniques, apps and processes which they can use to create content e.g using internet radio, creating animations using stop motion, using different presentation software etc

make informed decisions about issues to do with privacy, sharing, publishing etc

discuss the concept of digital footprints, manage their online identity appropriately and discuss the implications of some of their decisions

state and comply with the laws on copyright, use Creative Commons licensing and be able to use and select open content music, video and images.

evaluate the programme and identify what they have learned and how they will use it.

Proposed design

The original design was to base the programme on 12 modules, each lasting 2 hours. The plan was to deliver one module each week over 12 weeks. This was soon found to be impractical, so the course was reduced to one introductory (‘Getting Started’) evening of two hours plus two 7-hour days, totalling 16 hours rather than the 24 hours originally planned. The presentation evening was omitted in favour of awarding certificates at the end of the course, and the evaluation was folded into the final afternoon plus some follow-up interviews, so the actual course was just 4 hours shorter than planned.

Coverage

Modules and indicative content:

Module 1: Getting started
● What are digital vocational biographies / narratives (??), why should we create one and how can we use them?
● Assessment of own needs for a multimedia vocational biography
● Reflecting on own learning and picking out the things they want to highlight
● Looking at other examples
● Finding out how the platform works
● Introduction to different media and file types that can be uploaded to the platform
● Macro-planning

Module 2: Story telling
● Why are stories useful?
● What makes a good story? Listening and looking at examples
● Practical work on scripting and telling a story – narrative units
● Taking pictures to illustrate a story
● Getting to grips with file formats (jpg, png, tiff etc.) and experimenting with simple manipulation of images (e.g. cropping, exposure, intensity, contrast, simple effects etc.)
● Cloud storage and local storage – e.g. use of Flickr, iPhoto etc.

Module 3: Using audio
● Telling a story in audio – advantages and disadvantages, painting a picture with words
● Familiarising with equipment
● Speaking into a mic – use of voice
● Storyboarding and telling a story in sound
● Using Audacity or GarageBand to record
● Editing using Audacity or GarageBand
● Adding sound effects and music – sources, how to add another track, copyright etc.
● Storing and exporting audio files

Module 4: Using video
● Looking at some autobiographical videos, exploring different approaches
● Familiarising with equipment
● Simple storyboarding
● Shooting a short video (without sound)
● Using iMovie or MovieMaker to edit video
● Adding a sound track (music, voice over or sound effects)
● Shooting a short video with sound
● Adding stills and text
● Tools for annotating videos
● Storing, exporting and sharing video, embed codes etc.

Module 5: Self presentation
● Creating and projecting an image
● Engaging an audience
● Speaking to camera / mic
● Habits and mannerisms
● Appearance, posture

Module 6: Writing for the web
● What makes writing for the web different? Linear text vs hyperlinked text
● Writing in ‘chunks’
● Stories, opinions, information, persuasion, captions
● Practical writing

Module 7: Social media
● What do they use already?
● Creating a Pinterest board / Facebook page / LinkedIn profile / Twitter following

Module 8: Creating presentations
● What software is available?
● How much content?
● Designing a slide

Module 9: Animation and cartoons
● Experimenting with some alternative formats that can add an extra dimension to their story
● Chance to make 2D and / or 3D stop motion of a learning event
● Opportunity to use animation software (e.g. Moovly)
● Creating a cartoon or comic strip (e.g. using Comic Life)

Module 10: Digital identities
● What is a digital identity?
● Digital footprints
● Privacy and security
● Data protection

Module 11: Presentation night / social event – omitted

Module 12: Where next and course evaluation – combined

In reality, some modules were omitted because no one was interested (e.g. the animation session) or were shortened (e.g. Digital Identities). Also, not everyone did all the modules. The two-day format was particularly helpful in this regard as it allowed participants to work on different things at different speeds depending on the content they were creating.

Following the transnational partner meeting we also incorporated several activities designed by the Spanish partners, especially into modules 1, 2 and 5.

Budget

Apart from the actual delivery time, each trainer had 5 days of preparation time. This took into account that some materials were already prepared. The total staff time was approximately:

Publicity etc.: 6 hours
Meetings and visits: 6 hours
Preparation: 70 hours (x 2 trainers)
Actual teaching time: 32 hours (x 2 trainers)
TOTAL: 114 hours

Approximate cost of expendable materials and resources: €100

Equipment: all provided by the trainers

None of these costs were eligible under the project budget, so they came from the partners’ own funds.

At UK wage rates the approximate cost of the course was €4,000.
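As a rough cross-check of these figures (not part of the original costing), the staff hours and an assumed hourly rate can be combined in a short script. The €35/hour rate is purely illustrative, chosen only because it reproduces the approximate €4,000 total quoted above.

```python
# Illustrative cross-check of the staff-time budget quoted above.
# The hourly rate is an assumption (roughly 4000 / 114), not a figure from the report.
staff_hours = {
    "Publicity etc.": 6,
    "Meetings and visits": 6,
    "Preparation (2 trainers)": 70,
    "Teaching time (2 trainers)": 32,
}

ASSUMED_HOURLY_RATE_EUR = 35  # illustrative only

total_hours = sum(staff_hours.values())
estimated_cost = total_hours * ASSUMED_HOURLY_RATE_EUR

print(f"Total staff hours: {total_hours}")                  # 114
print(f"Estimated staff cost: about {estimated_cost} EUR")  # about 3990 EUR
```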


Research

Much of the research was web based, as the notion of digital vocational narratives is relatively new and there have been few published papers. The research covered 4 main strands. The initial research report investigated the socio-economic context, the VET and pre-VET system and the responses to youth unemployment. The second stage of the research was profiling the young people we were targeting. This was written up as a report and is available on the website. The third stage was looking at work that had been undertaken on constructing digital narratives and the tools and techniques that could be used. This was by far the most useful and informative research and the one which had the greatest impact in practice on the course. The research outputs all appear as articles or posts on the website. In parallel, we also researched possible interventions, pedagogical approaches and new ideas that could be incorporated. Research into possible evaluation approaches was also part of the research agenda.

3.3 Process evaluation: Actions

The third stage of the evaluation examines the process and answers the question ‘Are we doing it as planned?’

This provides decision-makers with information about how well the programme is being implemented. By continuously monitoring the project, we were able to judge how well the project was following the plans and guidelines, what conflicts occurred, what staff support was needed and offered, the strengths and weaknesses of the materials, the delivery and any budgeting problems.

Development

The initial phases of the project, prior to actually running the in-country courses, were stimulating and informative, with a useful and healthy exchange of ideas. There was some inevitable frustration because of cultural and language difficulties when concepts and ideas did not


translate easily from one country to another. There were also tensions between overall approaches, ranging from technology-focussed at one extreme to ‘soft-skills’ and personal growth approaches at the other. However, these were reconciled when it became apparent that they were not mutually exclusive; rather, it was a case of which skills were ‘foregrounded’ and which were ‘backgrounded’.

Also, each country was running to a different timescale, which was inevitable because of local conditions and constraints. The course in Wales benefitted from being last, as it was enhanced by the feedback from other countries. Nevertheless, the different timelines did cause some problems.

Input

Given that the pilot courses in each country were so varied, a crucial part of the evaluation was to identify key evaluation questions that would be applicable to each national context. It was agreed that the evaluation questions to be addressed by the national teams regarding the pilots would be framed as:

1. What did we do well and why?
2. What should we have done more of?
3. What should we have done that we did not do?
4. What did we do that we should have done less of or stopped doing?

Monitoring

In Wales, evaluation data was collected in two forms. Firstly, the participants were given post-it notes and asked to write their comments about the training and the platform demonstration; these were rank ordered by participants allocating a number of ‘votes’ to each. The second data gathering activity was asking the group for verbal feedback about what they liked, what could be improved and whether they would use the tools or platform in future, supported by a sentence-completion exercise.
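The rank ordering itself is simple to reproduce once the post-it comments and their vote counts have been typed up. A minimal sketch is given below; the comments are taken from the feedback lists that follow, but the vote counts are made-up placeholders rather than the actual workshop tallies.

```python
# Minimal sketch of rank ordering post-it comments by allocated votes.
# The vote counts are illustrative placeholders, not the actual workshop data.
votes = {
    "It was a great atmosphere / it was fun": 7,
    "Really clear input on how to do things": 5,
    "Made me feel more confident": 4,
    "More handouts, not just on web": 3,
}

ranked = sorted(votes.items(), key=lambda item: item[1], reverse=True)

for position, (comment, count) in enumerate(ranked, start=1):
    print(f"{position}. {comment} ({count} votes)")
```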

Feedback

1. What did we do well?

The collected results in order were:
● It was a great atmosphere / it was fun / really enjoyable
● Really clear input on how to do things / clear explanations / made things seem easy
● We did things we thought we’d never do / learned so much in a short time
● Made me feel more confident / didn’t feel stupid / I knew more than I thought
● Talked to us like we were adults / didn’t boss us around
● People helped each other / working in groups / sharing skills
● Gave us skills we could use in a job / in looking for a job


● Found out lots of stuff we didn’t know / what Creative Commons is

2. What should we have done more of?

The collected results in order were:
● More time for everything
● More handouts, not just on web
● Got us to make a decision about what we were going to do earlier because we ran out of time
● More time to work on our own
● More work in groups
● More focus on the storytelling tools and methods and less on the e-portfolio/platform

3. What should we have done that we did not do?

The collected results in order were:
● Found different room
● Room was either too hot or too cold
● Got sandwiches - not having to go to refectory
● Told us more about the course in advance - more people would have come
● Better internet connection!

4. What did we do that we should have done less of or stopped doing?

The collected results in order were:

● Talked about the platform
● Gave us another account/password to remember/platform to update
● Apologising about the room!

(For the purposes of the workshop, the demo platform http://mytrainingforgold.eu/ was presented to participants and they were invited to explore it using a guest login.)

3.4 Product evaluation: Outcomes

The final stage of the evaluation answers the question ‘Did the programme work?’

By measuring the actual outcomes and comparing them to the anticipated outcomes, decision-makers are better able to decide if the project should be continued, modified, or dropped altogether. This is the essence of product evaluation.

Impact

The consensus was that the young people enjoyed the course and said they had learned a lot. There were far more positive comments than negative ones. Mostly they said they had


developed technical skills rather than ‘employability’ skills but nevertheless felt that these technical skills would be useful to a future employer. Several of the group said their confidence had improved but had difficulty explaining in what way and whether they thought this was a long term change.

The issues which had the greatest impact were around the need to understand and manage privacy and digital footprints and how online identities could create the right - or wrong - impression for future employers or educators.

The impact on other stakeholders is difficult to measure as no other organisation has run the course. However, when other private sector agencies and public sector providers realised that this was a pilot initiative and that Pontydysgu were not intending to continue to run the course, there was interest from several people in having copies of the programme and the activities, with a view to incorporating them in their own provision.

Effectiveness

The young people all judged the course to be effective and fit for purpose. A surprising outcome was that several of them said it made them check their privacy settings. However, they had reservations about the platform, which they did not see the point of. They were reluctant to sign up to new accounts, being comfortable with the platforms they already used and would prefer to see a better way of integrating or using these rather than having more new ones. All participants had a Facebook, Instagram, Snapchat, YouTube and Pinterest account. Some used Twitter, some had a blog and some used Skype. One or two had used other collaboration software.

The problem was not the actual Gold platform – they were generally happy using it, although some complained that it was not as intuitive as those they were already using (but this could have been to do with lack of familiarity). It should also be borne in mind that most educational institutions in the UK have their own platforms and, whilst participants found the demo platform useful to contextualise the outcomes of the workshop, they were reluctant to use an external tool to share their work when they already had their “own” platforms. “If I’m going to show this to an employer I’ll write my own blog.” “Our college already uses an e-portfolio. I’d rather add my new work to that than start a new page.”

Transportability (Transferability)


With some modification to overcome the problems already highlighted, the course could be replicated in a wide range of contexts. However, it is resource heavy and may not be a cost effective solution for small or one-off use.

Scalability

For exactly the reasons mentioned above, the course would lend itself well to scaling up and, given the relatively high cost, scaling up would bring significant savings so that the unit cost would be realistic. In particular, the high capital outlay could be spread over a greater number of participants. It may be technically possible to deliver all or part of the course online to a much larger cohort, but this is unlikely to attract the same target group. Part of the rationale of the face-to-face group is to develop confidence, motivation and engagement in a group of largely reluctant learners, and it is unlikely that this could be achieved online. However, there could be signposting to some follow-up or additional online modules.

Most of the modules lend themselves to online delivery so it may be that an on-line version of the course would be appropriate for a different demographic such as women returning to the workforce.

Sustainability

For this course to be sustainable it would need to be adopted by mainstream providers in the public or private sector, or both. This is unlikely to happen in the short term. However, it is likely that elements of the course, including the core idea of vocational digital narratives, will be adopted by some of the agencies we are in contact with and to whom we have promised access to all the materials and programme details.

Adjustment (Adaptability / flexibility)

A key learning point for future projects is that we should not be focussing on developing platforms for young people but rather starting with the ones they use and extending their abilities to exploit and combine those more effectively and more creatively.

A second learning point is that this project might be better suited to a slightly older age range or those returning to learning. The young people we worked with were disenfranchised, lacking in confidence and reluctant to share their ‘digital story’ even with people in the group without a lot of encouragement. Sharing and publishing in a public arena would take many more interventions.

A final learning point is that the young people recognised the value of story-telling and said that in the future, they were more likely to share stories or to think about using a story to explain what they had done.


Appendices

Word cloud derived from participant feedback

Word cloud derived from tutor feedback
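As an indication of how word clouds like these can be generated from the raw feedback, here is a minimal sketch using the open-source Python wordcloud package. The input file name is a placeholder, and the actual clouds in the appendices were not necessarily produced this way.

```python
# Minimal sketch: generating a word cloud from a plain-text file of feedback.
# "participant_feedback.txt" is a placeholder file name, not a project artefact.
from wordcloud import WordCloud  # pip install wordcloud
import matplotlib.pyplot as plt

with open("participant_feedback.txt", encoding="utf-8") as f:
    text = f.read()

cloud = WordCloud(width=800, height=400, background_color="white").generate(text)

plt.figure(figsize=(10, 5))
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("participant_feedback_wordcloud.png", bbox_inches="tight")
```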
