
Running head: MASTER’S PROJECT EVALUATION PLAN: TECHNOLOGY ARTIFACT

Master’s Project Evaluation Plan

Technology Artifact

Erin Young

Instructional Design Technology Master’s Candidate

University of Cincinnati

Abstract

This evaluation plan describes the steps to evaluate a Reading Strategy screencast created while completing the master’s project at the University of Cincinnati during the Fall 2019 semester. It follows Dick, Carey, & Carey’s formative evaluation design from their text, The Systematic Design of Instruction (2015), with a focus on one-to-one evaluation with learners, and it also draws on Tessmer’s (1993) one-to-one evaluation. The plan covers the participants, the steps of the formative evaluation design, the steps of one-to-one evaluation, the steps of analysis, and a timeline for completing the research necessary to carry out the evaluation plan. It also discusses the research behind formative evaluation design and one-to-one evaluation.

Master’s Project Evaluation Plan

For a technology artifact: Reading Strategy screencast

For this evaluation plan, I will be evaluating a screencast created during my coursework at the University of Cincinnati. The assignment called for the creation of a screencast and the demonstration of mastery in creating one. For the course, I created a screencast teaching different reading strategies. The screencast is geared toward teaching first and second grade reading students how to use reading strategies based on Marie Clay’s Reading Recovery framework. It is also designed for elementary teachers looking to learn more about elementary reading strategies. This evaluation plan will help determine how to evaluate the screencast’s technology features, the learning object itself, its accessibility, and how well learners can learn from it, including how effective it is as an educational tool.

Audience

This evaluation plan has an audience composed of the Master’s Project professor, classmates and peers taking the Master’s Project course at the University of Cincinnati, and the evaluators of my master’s portfolio. Once completed, the reading strategy screencast evaluated in this plan will also have an audience of elementary reading intervention students who will watch and evaluate the screencast, in addition to other reading intervention teachers and peers in my district and in my Reading Recovery cohort. I also expect the Reading Recovery teacher leader from the Solon, Ohio site, as well as other Reading Recovery students, to be part of the audience.

Sources

This evaluation plan was created based on Dick, Carey, & Carey’s (2015) The Systematic Design of Instruction. It follows their formative evaluation design, with a focus on one-to-one evaluation with learners, and it also follows Dick et al.’s (2015) data analysis for one-to-one trials. Tessmer’s (1993) chapter “One-to-One Evaluation,” featured in Planning and Conducting Formative Evaluations: Improving the Quality of Education and Training, was also used for a second perspective on one-to-one evaluation and the steps involved.

Questions To Be Answered

This evaluation plan aims to answer some crucial questions throughout the evaluation process. They include the three central criteria that Dick, Carey, & Carey (2015) strive to answer in their own evaluation process regarding clarity, impact, and feasibility. When looking at clarity, Dick et al. (2015) ask whether “...the message, or what is being presented...” is “clear to individual target learners” (p. 288). They also look at impact, specifically asking, “what is the impact of the instruction on individual learner’s attitudes and achievement of the objectives and goals” (p. 288). Lastly, feasibility asks, “how feasible is the instruction given the available resources (time/context)” (Dick et al., 2015, p. 288). Dick et al. (2015) go into more depth in Table 11.2 on p. 289.

Once these questions are answered throughout the evaluation process, specifically through the one-to-one trials that will occur, a true picture will emerge of what can be improved in my reading strategy screencast. Students will also be asked to answer questions specifically about the clarity of the instructions, text, objectives, and directions. In addition, they will be asked whether the instruction is complete, about the number of examples, the explanations, graphics, and visuals, and even about spelling and grammar errors (Tessmer, 1993). Appendix 1 and Appendix 2 include the more specific questions that will be used. These questions will be asked once the students finish watching the screencast and participate in a continuation of their one-to-one trial, reflecting on their experiences with the screencast.

Evaluation Methodology

Participants

Three students from different reading intervention groups will watch the reading strategy screencast. The participants will be three students on my current reading intervention caseload, with varying ability levels: two third graders and one fifth grader, all from different classes and seen at different times throughout the school day. None of the students are seen together, and each has an individualized reading program with me. These particular grade levels were chosen due to the nature of the tool and the students’ ability to evaluate it.

My current caseload includes first graders, third graders, fourth graders, fifth graders, and sixth graders. The first grade students are too young to accurately evaluate the screencast and understand what is to be accomplished; they are also taught via the Reading Recovery framework and are not able to participate in this study. I did not pick my sixth grade students because they are English language learners and I do not provide reading intervention specifically for them; instead, I provide English as a second language services for these two students. There is a language barrier with these students, which may make evaluation difficult for them, and the tool may not be appropriate for them given the services they are receiving. I decided it would make more sense to focus specifically on students who receive reading intervention from me and read at approximately the same reading level. The third graders and fifth grader, on the other hand, all receive reading intervention from me, although at different times throughout the day. They are familiar with the Reading Recovery framework, on which the strategies in the screencast are based, and are also familiar and comfortable with my teaching style. This will help them understand what is being taught and what is being asked of them in the evaluation. It will also help me get the best possible feedback and the most data, as they are already comfortable talking with me and sharing their true opinions and feelings. The students’ reading abilities range from significantly below grade level to just barely below grade level, which will give me the best chance of getting a true picture of what can be improved in the screencast.

Tessmer (1993) also discusses the importance of choosing learners who will provide the most data. This includes thinking about students who will be open and honest and will discuss what they are seeing and feeling. If students will not share what they are thinking and experiencing while watching the screencast, it will be very difficult to obtain data from them and conduct a truly effective trial. Tessmer (1993) also mentions making sure that students are comfortable and that they understand what their task is. This particular reading screencast is designed for elementary students reading at or around a second grade reading level, which is another reason these particular students are appropriate: all three are reading at a second grade reading level. This is also why I decided against using my first grade students, as the material is above their current reading level, which would make it more difficult to get an accurate picture of the screencast’s effectiveness. The students also come from a variety of socioeconomic backgrounds, including one student who receives free or reduced-price lunch. All three students are a strong representation of my caseload and of the ability and cognitive levels of my students.

Methodology

The methodology that I will be using follows the formative evaluation design, specifically one-to-one evaluation with learners. Dick, Carey, & Carey (2015) discuss the importance of revising instruction based upon the data collection and analysis used during the formative evaluation design process; this allows educators to truly see whether their instruction is effective (p. 285). I will conduct multiple one-to-one sessions with learners while they watch my reading screencast. In addition, I will conduct sessions with these same students after they have finished watching the screencast to get the best possible feedback and the most information possible. Three individual learners, with varying ability levels and varying attitudes, will complete trials with my screencast so that I can receive the most valuable feedback possible. This provides immediate, authentic feedback while also being time efficient, since I will receive the feedback in real time. This approach fits my screencast well because the lesson is meant to be seen by other students and other teachers. If my instruction is not clear and easy to understand, my teaching will not come across and learning will not occur. Along the same lines, teachers will not be able to learn which teaching strategies to use if I am not clear and concise. This is why evaluation is so important, not only for students but for other teachers as well. According to Dick et al. (2015), one-to-one evaluation with learners helps to “identify and remove the most obvious errors in the instruction” (p. 288). It is crucial to fix these elements in a screencast that is accessible to many different people, even those who have not been directly taught by me.

Evaluations And Guidelines

One-to-one evaluation by students. In order to conduct research to evaluate my reading screencast, I will use the first stage of formative evaluation of instruction: one-to-one evaluation. This stage involves selecting one to three learners to work with individually to get honest and immediate feedback through a direct approach. These learners will represent the larger population of learners who will view my screencast (Dick et al., 2015, p. 288). This will give me true insight into the effectiveness of my screencast and what students truly feel about it. During this evaluation stage, I will be able to make crucial decisions about my screencast and how to revise it.

“The three main criteria and the decisions designers make during the evaluation are as follows:

1. Clarity: Is the message, or what is being presented, clear to individual target learners?

2. Impact: What is the impact of the instruction on individual learner’s attitudes and achievement of the objectives and goals?

3. Feasibility: How feasible is the instruction given the available resources (time/context)” (Dick et al., 2015, p. 288).

Collecting data guided by these three criteria will allow me to see whether my initial thoughts about student opinions and needs were correct, or whether there are misconceptions and large areas that need revision.

As discussed above, Dick et al. (2015) suggest a few specific criteria to keep in mind and have students focus on during our one-to-one trials: “clarity of instruction, impact on learner, and feasibility” (p. 289). For clarity of instruction, Dick et al. (2015) suggest looking at clarity in the message, links, and procedures; for impact on learner, at attitudes and achievement; and for feasibility, at the learner and the resources (Table 11.2, p. 289). These criteria will ensure that all areas are analyzed and evaluated. More specific criteria are included in Dick et al.’s Table 11.2 on p. 289.

Students will also be asked the one-to-one questions in Appendix 1, specifically about the screencast and their experience with it. Questions range from the clarity of the instructions, text, and objectives to the directions and whether students understood them. Questions also cover the clarity of the text, images, and explanations, how boring or exciting the screencast is, and whether the visual quality is adequate. I will also have students complete a one-to-one log (Appendix 2), where they can tell me any observations they have while watching the screencast, what they liked most, what they would change, and any other notes they have for me. This is a great way to have a conversation with students about what they are really feeling while watching the screencast.

Setting Standards and Data Collection

In order to collect data for the evaluation of the screencast, I will use Kaltura to record my interactions with students during the one-to-one trials. Kaltura is a recording program that I can install on my computer that records exactly what students are doing on the computer, in addition to whatever conversations are happening in front of the computer. It was introduced to me during my coursework at the University of Cincinnati, and I have used it in many different classes throughout my master’s program. In addition, I will take notes as students complete the one-to-one trials, specifically focusing on Dick et al.’s suggestions from Table 11.2. I will also use the data collection sheets suggested by Tessmer (1993), as seen in Appendix 1, Appendix 2, and Appendix 3.

Tessmer (1993) discusses the importance of managing the evaluation while collecting data. This means that there is an ongoing conversation occurring while students are viewing the screencast: not only is Kaltura recording the students, but I am also asking questions and filling out the one-to-one sheets as students complete the trials. Tessmer (1993) suggests listening closely and carefully while observing, but also being an active listener: praising students for answering, asking follow-up questions when appropriate, helping if necessary, making clear that the goal is to find out what is wrong with the screencast, not what is wrong with them, and being encouraging and open. This also ties into asking follow-up questions once the screencast portion is over. Tessmer (1993) explains that there is a follow-up portion where the Appendix 1 and Appendix 2 questions can be asked. This is where more of the active conversation can happen and where debriefing should occur. Students should even be asked what they think of the process itself, not just what they think of the screencast they just watched. It is crucial to utilize this time with participants (Tessmer, 1993).

Decision Making and Final Report

Once the trials have been completed, I will use the Kaltura videos and my notes to fill out the one-to-one data sheet for each student (Appendix 3). This will allow me to take the data from Appendix 1 and Appendix 2 and compile it in one place to get a final picture of what my students truly thought of my screencast. It is a great way to summarize all three one-to-one sessions in a way that is easy to read and understand. This is also when I will watch the Kaltura videos one more time to make sure that I am not missing any key information from watching the students view the screencast and from the conversations I had with them during and after. I want to make sure I include everything that was discussed, as I do not know what is relevant until I see it on paper. The data collection sheets in Appendix 1, Appendix 2, and Appendix 3 are crucial pieces of the final report and decision-making process because they will tell me where my areas of improvement are. They are the final pieces that will show me exactly where students have suggestions and need changes in order to understand the content better.
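To make this compilation step concrete, the following is a minimal sketch in Python of how the per-learner notes from Appendix 1 and Appendix 2 could be merged into an Appendix 3-style comment-by-learner matrix. The learner labels and comments here are hypothetical placeholders for illustration, not data from the actual trials.

    from collections import defaultdict

    # Hypothetical per-learner notes gathered during the one-to-one trials.
    learner_notes = {
        "L1": ["Practice items too easy", "Directions unclear on screen 2"],
        "L2": ["Directions unclear on screen 2"],
        "L3": ["Practice items too easy"],
    }

    # Mark an "x" wherever a learner raised a comment, mirroring Appendix 3.
    matrix = defaultdict(dict)
    for learner, comments in learner_notes.items():
        for comment in comments:
            matrix[comment][learner] = "x"

    # Print the datasheet: one row per comment, one column per learner.
    learners = sorted(learner_notes)
    print(f"{'Comment':<35}" + "".join(f"{l:>5}" for l in learners))
    for comment, marks in matrix.items():
        row = "".join(f"{marks.get(l, ''):>5}" for l in learners)
        print(f"{comment:<35}{row}")

In practice this grouping would simply be done by hand on the Appendix 3 sheet; the sketch only shows how comments shared by several learners naturally cluster into the rows of that datasheet.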

Evaluation Instruments

Sampling Methodology

According to Dick et al. (2015), “one of the most critical decisions by the designer in the formative evaluation is the selection of learners to participate in the study” (p. 288). In order to get accurate feedback, I will follow Dick et al.’s (2015) advice and be very selective about the learners I choose. They explain that learners do not need to be chosen randomly, as this is not an experiment and should not be treated as such. They also explain that the first factor to keep in mind is that the goal is to get the most honest feedback possible in order to create the best possible product for my students and those who will be learning from my screencast. This is nearly impossible to do with learners who have never met the designer, because of the specific reading strategies taught in the screencast and its tie to Reading Recovery, the framework it is loosely based on. Instead of picking random learners, learners should be chosen very carefully (p. 288). In order to get the most accurate feedback, I have to make sure I have a strong rapport with the learners. This will allow them to feel comfortable telling me their true and honest feelings, even if those feelings are not favorable. Picking random students whom I have never met could make it very hard to get honest feedback, as younger students especially may have a difficult time opening up to teachers they do not know. They may be scared to share their true feelings, especially if they do not like something or have found an area where improvement is needed.

Dick et al. (2015) also mention that, “for the one-to-one phase of the formative evaluation, the designer may wish to select one learner with a very positive attitude toward that which is being taught, one who is neutral, and one who is negative” (p. 289). I think this is important because, again, it will show a true representation of all the learners I work with, and I made sure to keep it in mind during my selection process for this evaluation. One of the students I picked does not always have the best attitude toward his learning. He tends to get frustrated when reading gets difficult and often tries to deflect and even get me off topic. It is important to include him so that I have that true representation and not just students trying to please me. I know that he will give me honest feedback and will not hold back if he does not like something.

In addition, it is crucial to make sure the learners represent a large range of ability instead of selecting similar ability levels. It is key that average ability and low ability students are represented. In the case of this particular evaluation, I chose students from various grade levels all reading at a second grade reading level. This is a great way to include a variety of ability levels while still making sure that students are at an appropriate reading level to learn from and evaluate the screencast. There is no reason to include high ability learners, because they are not representative of the larger group (Dick et al., 2015, p. 288). Attitude, previous learning, experience with the topic of the screencast, age, and even preconceived notions must also be considered when selecting students for the one-to-one evaluation (p. 288). Again, it is vital to get a true picture of what learners are feeling and a true representation of my students. It is crucial to remember that there are factors beyond ability level to consider throughout the selection process (Dick, Carey, & Carey, 2015, p. 289).

Challenges

As with many evaluations, there are challenges to consider, the first of which is the time constraint for conducting the one-to-one trials. Due to the nature of the Master’s Project, all research, analysis, and conclusions must be completed by the end of one semester. This can be rather challenging on top of other classes and teaching full time. Although time constraints are a real concern, I will not let them prevent me from following my timeline. It may also be challenging to get the participants to complete the trials within this timeframe. School schedules are very tight, and my teaching assignment is very specific in nature, as I teach Reading Recovery and Wilson Reading. One-to-one trials will need to be completed during noninstructional time, which may prove difficult. The trials will be completed, but it may not be an easy process. I will have to ask teachers’ permission to see students during my planning period, which may also be difficult. I know that my coworkers will be understanding, especially because of the nature of the assignment, but it is still a challenge worth mentioning.

In addition, Kaltura is not installed on my school computer, so I will need to get permission to add it. This should not be an issue, but it may take some time, as I am not able to install programs on my computer myself and must have our technology person physically install it for me. Another challenge to consider is the young age of my participants. Since they are elementary students, they may not always participate as expected or as I would like, and I may need to find different ways to motivate them. I know that my students will try to please me and do as I ask, but I must consider the fact that this is different from their normal reading intervention framework and different from what they normally expect. I believe that my students will enjoy this experience, but it is worth acknowledging that kids are not always predictable and may not always behave as we think they will.

Analysis Procedures

In order to analyze the data, I will take the information gathered from the one-to-one trials and look at each area of the formative evaluation criteria. I will look at the three areas, “clarity of instruction, impact on learner, and feasibility” (Dick et al., 2015, p. 289), and go through Dick et al.’s (2015) Table 11.2 (p. 289) for each learner I worked with.

I will also review all the notes gathered throughout the sessions and identify areas of strength, areas of growth, and areas of weakness. I will then take this data, go through each criterion, and graph the responses from all three students. This will give me a true picture of what I need to improve in my screencast and how to make it better for learners. In addition, I will look at my two data sheets, Appendix 1 and Appendix 2, and make sure to analyze that data as well when filling out Appendix 3.

Dick et al. (2015) note that “particular aspects of the instruction found to be weak can then be reconsidered in order to plan revisions likely to improve the instruction for similar learners” (p. 292). They also warn: “One caution about data interpretation from one-to-one trials is critical: Take care not to overgeneralize the data gathered from only one individual. Although insuring that the participating target learner is representative of the intended group helps ensure that reactions are typical of other target group members, there is no guarantee that a second target learner will respond in a similar manner” (Dick et al., 2015, p. 292). I must remember that each student is different and will have differing views, with different data to analyze.
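As a rough sketch of the graphing step described above, the Python snippet below assumes each student’s responses have already been coded as a simple issue/no-issue flag against Dick et al.’s (2015) three criteria. The scores shown are hypothetical; in practice they would come from the completed Appendix 1, 2, and 3 sheets.

    import matplotlib.pyplot as plt

    # Hypothetical coded responses: 1 = student flagged an issue with this
    # criterion, 0 = no issue; one entry per student (three students).
    responses = {
        "Clarity": [1, 0, 1],
        "Impact": [0, 0, 1],
        "Feasibility": [0, 1, 0],
    }

    # Count how many of the three students flagged each criterion.
    issue_counts = {criterion: sum(flags) for criterion, flags in responses.items()}

    # Plot a simple bar chart to see which criterion needs the most revision.
    plt.bar(list(issue_counts), list(issue_counts.values()))
    plt.ylabel("Students flagging an issue (out of 3)")
    plt.title("One-to-one trial results by criterion")
    plt.savefig("one_to_one_summary.png")

With only three learners, a chart like this is a visual aid for spotting patterns, not a statistical result, which is consistent with Dick et al.’s caution against overgeneralizing one-to-one data.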

Dick et al. (2015) also state that “the outcomes of one-to-one trials are instruction that (1) contains appropriate vocabulary, language complexity, examples, and illustrations for the participating learner; (2) either yields reasonable learner attitudes and achievement or is revised with the objective of improving learner attitudes or performance during subsequent trials; and (3) appears feasible for use with the available learners, resources, and setting” (p. 293). It is crucial to take all of this information and compare similarities and differences in order to make the best possible instructional changes to my screencast. In addition, there are different types of basic information available to be analyzed. Dick et al. (2015) explain that “the first step is to describe the learners who participated in the one-to-one evaluation and to indicate their performance on any entry-skill measures. Next, the designer should bring together all the comments and suggestions about the instruction that resulted from going through it with each learner” (p. 318).

After gathering all of this information, Dick et al. (2015) suggest revising the instruction. At this point, easy revisions have likely already been made through the initial one-to-one trials; now is the time to reflect on what else can be done to make the instruction better, specifically the difficult revisions. This is the time to look at whether objectives need to be clearer, whether instruction needs to be improved for learners to better understand the content, and exactly where learners made mistakes during the one-to-one trials so that misinterpretations can be fixed (p. 319). Dick et al. (2015) express that “the usual revisions at this stage are ones of clarification of ideas and the addition or deletion of content, examples, and practice activities” (p. 319). They also note that at this point it is acceptable to flag areas to change in the future when working with small groups.

Timeline

Below is a short timeline of how I will evaluate my reading screencast. It is important to note that the one-to-one evaluations and analysis begin on October 6th, with the first evaluation report draft beginning on October 13th. Artifact revision comes on November 3rd, with the portfolio draft due on November 10th. The final presentation and defense will be on or around December 8th, which concludes my master’s degree.

Evaluation Plan Draft: September 8
Evaluation Plan Peer Feedback: September 15
Evaluation Plan Revision: September 22
One-to-One Evaluations & Analysis: October 6
Evaluation Report Draft: October 13
Evaluation Report Peer Feedback: October 20
Evaluation Report Revision: October 27
Artifact Revision: November 3
Portfolio Completion Draft: November 10
Portfolio Completion Peer Feedback and Revision: November 17
Presentation Preparation: November 24
Presentation and Defense: December 8

Conclusion

The Reading Strategy screencast created during my education program at the University of Cincinnati will be evaluated following the formative evaluation design, with a focus on one-to-one evaluation with learners, as discussed in Dick, Carey, & Carey’s (2015) The Systematic Design of Instruction and in Tessmer’s (1993) “One-to-One Evaluation.” This evaluation plan goes through the steps of the one-to-one evaluation process, focusing on how this specific screencast will be evaluated and analyzed. It looks at the participants in the one-to-one trials, the types of questions they will be asked, and the types of information that will be helpful for this evaluation plan. Finally, it discusses how the data obtained throughout the evaluation process will be analyzed and the steps that will be taken to revise the screencast created during this program. Each step will follow the included timeline, which is based on the timeline of the Master’s Project class.

Author’s Note

It must be noted that the Reading Strategies screencast was an early attempt at creating a screencast, made at the beginning of this master’s program. At the time of the project, I was just learning how to create a screencast and was not very familiar with this type of technology. It is a great tool to use for this evaluation plan precisely because of how new I was to the program when creating it. Creating this evaluation plan and following the steps of my timeline will be a great learning experience for me. I expect to learn a lot and will continue to improve as I gain more experience in this field. I fully expect to come out of this with a much better product and with more knowledge about the evaluation process and improving screencasts.

References

Dick, W., Carey, L., & Carey, J. O. (2015). The systematic design of instruction (8th ed.). Pearson.

Tessmer, M. (1993). One-to-one evaluation. In Planning and conducting formative evaluations: Improving the quality of education and training (pp. 70-100). Routledge.

Appendices

Appendix 1:

One-To-One Student Questions

1. Is the instruction clear?

● Is the text confusing?

● Is something clear but out of place?

● Are the objectives clear?

2. Are the directions clear (like #1, but often overlooked)?

● Did they understand the directions?

● Can they tell what the directions say to do?

● Can they do what the directions say?

3. Is the instruction complete?

● Are more or fewer examples needed?

● Is there enough explanation?

● Are graphics needed?

4. Is the instruction too difficult or too easy?

● Too easy or boring?

● Too hard or frustrating?

● Is there enough time?

5. Is the visual and aural quality adequate?

6. Are there typographical or grammatical errors?

Adapted from Tessmer (1993, pp. 70-100).

Appendix 2:

One-To-One Log

Learner name:

Background:

UNIT 1

A. Learner comments              Comments
Page/screen 1:
Page/screen 2:

B. Evaluator questions           Answer
What did you think of the screencast?
What would you change?

C. Aspects to observe/record     Observation/performance
Aspect 1:
Aspect 2:

Adapted from Tessmer (1993, pp. 70-100).

Appendix 3:

One-to-One Datasheet for Comments, Observations, and Question Information

                                                      Learners
                                                   L1      L2      L3
Learner comments
Comment 1 (e.g., “Practice items too easy”)         x       x       x
Comment 2

Observations
Observation 1 (e.g., “Trouble using help system”)   x       x       x
Observation 2

Evaluation questions
Question 1                                        Answer  Answer  Answer
Question 2                                        Answer  Answer  Answer

Adapted from Tessmer (1993, pp. 70-100).

Revision Notes:

I changed the title of my document to include the running head. I added page breaks when necessary. I changed my list to be written out in regard to some of the questions asked. I made sure to describe my steps better so that an outside person would be able to follow my evaluation plan. I simplified some of my language and also made sure to explain my rationale for picking the specific students that I picked. I took out the word blog, as I had originally been planning on using a different artifact. I checked some of my specific language, took out some of the uses of “area,” and changed them to “criteria” and explained more. I went back and included the chart I was referring to and added my worksheets to my appendix. I made sure to add more sources and direct quotes. I made sure to add another source and backed up my information. I added a paragraph before my timeline. I went back through to make sure that I was following a step-by-step guide and that my paper was organized and made sense. I filled in when necessary and expanded on my ideas as well. Overall, I went back through and looked at my objectives and made sure they were clear and that I was meeting each one.