
MJECT

MIDWEST JOURNAL OF EDUCATIONAL COMMUNICATIONS AND TECHNOLOGY

A Publication of the Illinois Association for Educational Communications and Technology

Volume 4, Number 1, 2010



The official publication of the Illinois Association for Educational Communications and Technology (IAECT).

IAECT Board of Directors
President: Seung-Won Yoon
Treasurer: Bradley Lipman
Secretary: Leaunda Hemphill

Board Members: John Cowan, Bruce Harris, Scott Johnson, Norma Scagnoli, Tracey Stuckey, Richard Thurman
Editor: Richard Thurman
Associate Editor: Bruce Harris

Editorial material published herein is the property of IAECT unless otherwise noted. Opinions expressed in MJECT do not necessarily reflect the official position of IAECT.

Permissions: Copyrighted material from MJECT may be reproduced for noncommercial purposes provided full credit acknowledgement and a copyright notice appear on the reproduction.

Trademarks: Product and corporate names may be trademarks or registered trademarks of their respective organizations.

ISSN 1938-7709 (print version)

Research

Does the Hearing Status of Deaf Educators Influence Technology Implementation Levels? - Becky Sue Parton, Mark Mortensen

Exploring How Media and Learning Context Affect Transfer with Simulations - Samuel A. Helms

Providing Metacognitive Prompts in Online Courses - Bruce R. Harris, Reinhard W. Lindner

Promising Practices

Utilizing Digital and Online Resources as Cognitive Tools to Enhance Digital Literacy - Paul A. Schlag

Instructions for Authors

Instructions for authors for MJECT are available on the internet. Go to http://www.iaect.org and then click on Journal.

MJECT Midwest Journal of Educational Communications & Technology

Illinois Association For Educational Communications & Technology

Does the Hearing Status of Deaf Educators Influence Technology Implementation Levels?

Becky Sue Parton
Mark Mortensen

“This study sought to determine if deaf educators who are deaf integrate technology at a higher level on the Levels of Teaching Innovation scale than their hearing counterparts.”

In deaf education, computer-based technology is perhaps even more important than in general education in terms of motivation and access (Milone, 2004; Roberson, 2001). The National Agenda (2005) established a set of priorities designed to improve the nature of deaf education programs, with goal six being: “Technology must be made available for and used by deaf and hard of hearing students to enhance their communication and language opportunities, enlarge their educational options, increase cognitive and academic skills, and enrich their lives now and in the future” (p. 32). The report elaborates that technology should be used to evaluate, analyze, solve problems, make decisions, and increase creativity outlets (The National Agenda, 2005).

Integrating technology into the curriculum, though, means more than just duplicating paper-and-pencil activities with new tools (Bober, 2003; Hughes & Zachariah, 2001; Staples, Pugach, & Himes, 2005; Walker & White, 2002). Unfortunately, most teachers were initially using technology only for that purpose. In Teachers’ Tools for the 21st Century: A Report on Teachers’ Use of Technology, 61% of teachers indicated that word processors and spreadsheets were the top-level computer activities performed (National Center for Education Statistics, 2000). Four years later, a study of over 3,000 K-12 educators still found that teachers primarily used their classroom computers for email and grade reporting (Bebell, Russell, & O’Dwyer, 2004). And four more years later, a study in over 1,000 schools across 11 states also found that teachers reported using computers primarily for word processing (83.9%) and emailing (68.6%) (Teclehaimanot, 2008). In a separate study focusing on deaf educators, Kluwin and Noretsky (2005) noted that all of the participants (n=38) reported word processing as their primary use of classroom computers. In a larger, national survey of almost 50,000 general education teachers from twenty states during the 2003-2004 school year, 23% were not using technology at all and another 66% were using it below the integration level, leaving only 11% who used technology as a tool for developing higher-order thinking skills (LoTi Technology Use Profile, 2004). Even teachers who were more comfortable with technology frequently used computers for personal productivity rather than for student learning (Earle, 2002; Popson, 2005).

However, both national and state standards support the idea of pushing technology use beyond the routine and into a higher cognitive domain (Owen & Demb, 2004; Steffan, 2004). The International Society for Technology in Education (ISTE) details the standards that classroom teachers should achieve in the National Educational Technology Standards for Teachers (NETS*T) publication (NETS, n.d.). For deaf students, the innovative use of technology not only helps learning but reduces isolation (Pillai, 1999; U.S. Department of Education, 2005). The Council on the Education of the Deaf (CED) accepted ISTE standards concerning technological proficiencies, and a technology strand was added to the annual convention of the Association of College Educators - Deaf/Hard of Hearing (ACE-D/HH) (U.S. Department of Education, 2003). Given these mandates and the associated grant and project incentives, it is therefore both a responsibility and an opportunity to identify ways to move technology toward the desired integration level (Darroch & Castle, 2005).

There is a limited amount of data in the literature about classroom technology research in deaf education and even less on technology integration at the K-12 level (Popson, 2005; Roberson, 2001; U.S. Department of Education, 2005). A review of the deafness-related dissertations for the years 2002-2004, as reported by the American Annals of the Deaf, resulted in 108 papers, of which 10 were technology-oriented. None of these ten dealt with technology integration levels in the K-12 environment.

Much of the existing research regarding deaf educators’ use of technology is descriptive in nature, focusing primarily on individual projects, hardware/software availability, and barriers.

For example, Johnson (2004) related that a $2.1 million Preparing Tomorrow’s Teachers to Use Technology (PT3) grant was awarded to ACE-D/HH. As part of this grant, a virtual community of learners was established at www.deafed.net. Technology in Education Can Empower Deaf Students (TecEds) was another national initiative to train teachers of deaf and hard of hearing students to integrate technology into the instructional process (Milone, 2004). The two main goals were to improve the technology skills of deaf educators and to train teachers to integrate those technology tools into their work with deaf students. As a result, two databases were established online, one of which houses lesson plans and activities that integrate technology, with a current repertoire of 95 modules (Mackall, 2002). In regards to the Star Schools Project, technology integration did become a focus as participants saw how beneficial it could be, especially in a bilingual classroom with a visual language (Nover, Andrews, & Everhart, 2001). For a more detailed review of additional deaf education technology initiatives, the interested reader is referred to the primary author’s look at distance-based projects (Parton, 2005).

“There is a limited amount of data in the literature about classroom technology research in deaf education and even less on technology integration at the K-12 level.”

Although not the focus of this study, much has been written about the barriers to creating an environment conducive to technology integration, which include time (Earle, 2002; Jacobsen, 2001; Norris & Soloway, 2000; Roberson, 2001; Wetzel, 2002), teacher attitude and technology availability (Alexiou-Ray, Wilson, Wright, & Peirano, 2003; Christensen, 2002; Knezek, Christensen, & Fluke, 2003), and administrator and program support (Kluwin & Noretsky, 2005; Pope, Hare, & Howard, 2002). Time, resources, and training, however, are first-order barriers to technology integration; attitudes, beliefs, and practices are second-order barriers (Earle, 2002). It may be that deaf educators and general educators share first-order barriers but distinguish themselves in terms of second-order barriers, since technology is arguably more intertwined in the daily lives of deaf people than hearing people. An excerpt from a Preparing Tomorrow’s Teachers to Use Technology (PT3) report highlights the rationale for this observation: “Faculty also note the increased frequency with which they work with deaf education colleagues from around the nation and how they are often perceived to be one of the more technologically proficient individuals within their own colleges” (U.S. Department of Education, 2005, Objective 1.2F). Additionally, of the seven communities that were involved in the Star Schools Project from the United Star Distance Learning Consortium (USDLC), the deaf education group out-performed all the others in terms of technology integration, based on qualitative observations (Rodgers, 2003). Thus, although there is a general sense that deaf educators rely upon technology in a more sophisticated way than general educators, there are no empirical studies in the literature that describe that phenomenon or confirm whether it exists. By identifying groups that are performing at deeper integration levels, it may be possible to extrapolate ways that other groups of educators might move beyond their current plateaus. Among deaf educators, there may also be a difference in how technology is used between teachers who are deaf and teachers who are hearing. “The Deaf Power movement heightened awareness of the relationship between deaf and hearing persons in the Deaf community. As a consequence, researchers began to look at differences between deaf and hearing teachers” (Marlatt, 2004, p. 349).

Description of the Study

This study sought to determine if deaf educators who are deaf integrate technology at a higher level on the Levels of Teaching Innovation (LoTi) scale than their hearing counterparts. The data analyzed in this study are a subset of survey data collected for a larger project comparing deaf educators and general educators (Parton, 2006). The null hypothesis was that there is no difference between the two groups.

Participants

Since researchers have studied the number of teachers using technology and shown that around 25% in recent years are at a non-use level (LoTi, 2004), this study sought participants who were unlikely to fall within that category. Therefore, participants from a national, educational technology-focused conference were targeted. The investigators could thus pay particular attention to the depth of integration rather than the mere presence of technology or lack thereof. For the pool of deaf educators, the conference attendee list from the Instructional Technology and Education of the Deaf International Symposium was selected. The conference took place in the summer of 2005 on the campus of the National Technical Institute for the Deaf. Only attendees or students who were currently teaching in a K-12 classroom in the United States were invited to participate.

The Deaf community is a well-recognized minority population with its own language, culture, and beliefs (Zazove et al., 2004). However, for the purpose of this study, a deaf person was anyone who had a hearing loss, regardless of their affiliation with the hearing or Deaf community. A deaf educator was any K-12 teacher who worked with deaf students either in a residential school, a regional day school, or in a mainstream setting. This research used a convenience sample whereby conference attendees voluntarily self-selected into the study. All conference attendees were sent an email by the researcher, and those who responded formed the subject pool. A total of 44 deaf educators, of which 37 were female and 7 were male, participated in the study.

Instrument

Moersch (2002) indicates that there are several instruments available to measure technology usage and integration, including Profiler, iAssessment, Mankato, and Levels of Teaching Innovation (LoTi). All these instruments have been aligned with the ISTE NETS*T. The percentage of questions that focus on integration rather than skills for each instrument is as follows: Mankato (3%), iAssessment (16%), Profiler (80%), LoTi (80%). The difference between the two types of questions can be summarized as:

Technology integration items focus on those statements involved with the use of computers in the classroom setting ranging from how computers are used to complement students’ understanding of the content to the manner in which computers are used to promote higher order thinking processes (Moersch, 2002, p. 12).

Thus, the instrument chosen for this study was the LoTi Technology Use Profile, a nationally validated assessment created in 1998 and used to explore the role of technology in the classroom. It has been taken by over 180,000 educators across the United States and used in over 40 dissertations (Stoltzfus & Moersch, 2005). There are three measures within the questionnaire: 1) the LoTi scale, 2) Personal Computer Use (PCU), and 3) Current Instructional Practices (CIP). The LoTi scale measures classroom technology integration within the following levels, associated implications, and example activities (LoTi, 2004):

• Level 0 – Nonuse: no computers, or computers are sitting idle
• Level 1 – Awareness: computers used for teacher productivity or as a ‘digital babysitter’
• Level 2 – Exploration: students use computers for low-cognition activities
• Level 3 – Infusion: students use computers to analyze data, make decisions, do research
• Level 4a – Integration (Mechanical): students use computers to solve authentic problems
• Level 4b – Integration (Routine): students initiate the use of computers to solve issues
• Level 5 – Expansion: students use computers in complex ways beyond the classroom
• Level 6 – Refinement: technology is fully integrated into instruction in a seamless way

An extensive validation study of the LoTi Questionnaire was recently performed by Stoltzfus and Moersch (2005). They achieved content validity, meaning the survey measures what it claims to measure. For example, the LoTi has a high correlation with a standardized rubric showing the technology implementation complexity of teachers’ lesson plans (Stoltzfus & Moersch, 2005). Previously, Griffin (2003) also reported, “Reliability measures from this study found the LoTi Questionnaire to have an overall reliability coefficient of .94. . . [it] is a reliable instrument for measuring levels of technology use” (p. 62). It also incorporates the work of the Concerns-Based Adoption Model (CBAM) and aligns with the School Technology and Readiness (STaR) Chart (Moersch, 2001). The STaR Chart helps schools determine if they are making progress toward technology goals and measures the impact of technology on student learning (Texas Star Chart, 2006).

Procedures

A prepared email with a consent agreement, instructions, and a link to the online LoTi Questionnaire was sent by the conference organizer so that personal data was not given to the researcher. The survey included standard demographic questions along with two customized demographic questions. The email was sent after the conference was over; thus, participants completed the surveys at home or at work. The total amount of time required to take the LoTi was approximately 20 minutes, and there was no cost associated with taking it. Subjects automatically received feedback from the LoTi site indicating their current and target levels of integration along with lesson plans. After the survey was administered to all subjects who chose to participate by March 2006, the researcher was electronically given all of the raw data from the LoTi administration office.

Data Analysis

The description of the group was based on seven demographic items. Respondents were asked to report their hearing status, gender, age, years of teaching experience, teaching grade level, education level, and hours of technology training completed. The electronic nature of the submissions resulted in only complete surveys being accepted. Descriptive data also included frequencies, means, and sample sizes of the LoTi profile score. The .05 alpha level was applied to all data as the standard for significance. An independent samples t-test was used to test the hypothesis.

                 Frequency   Frequency   Frequency       Percent
                 (deaf)      (hearing)   (All Deaf Ed)   (Total)
Years   < 5      0           4           4                9.1
        5-9      2           2           4                9.1
        10-20    4           6           10              22.7
        > 20     4           22          26              59.1
Age     21-30    0           5           5               11.4
        31-40    3           3           6               13.6
        41-50    1           11          12              27.3
        > 50     6           15          21              47.7

Table 1. Demographic Frequencies of Years of Teaching Experience and Age

                      Frequency   Frequency   Frequency   Percent
                      (deaf)      (hearing)   (Deaf Ed)   (Total)
Technology Training
        < 10 hrs      1           8           9           20.5
        11-20 hrs     2           11          13          29.5
        21-30 hrs     5           6           11          25.0
        > 30 hrs      2           9           11          25.0

Table 2. Demographic Frequencies of Hours of Technology Training


A total of ten (22.7%) of the 44 deaf educators were deaf, with the remaining 34 (77.3%) being hearing. There was a greater percentage of female subjects (n=37) than male subjects (n=7). Table 1 indicates the demographic frequencies of age and years of teaching experience.

Overall, half of the subjects (n=22) reported teaching in an all-level environment, and the majority held master’s degrees (n=28). Table 2 reflects the number of hours of technology training participants reported. The deaf educators were fairly evenly distributed among the groupings; however, the hearing deaf educators had slightly less technology training than those who were deaf. Both groups had attended the same technology conference; this question reflects additional, previous training reported by the participants.

Table 3 describes the response of participants to the question of what obstacle is the greatest barrier to integrating technology in the classroom. Almost half (47.7%) of respondents indicated ‘Time to Plan and Practice’ was the greatest barrier.

Table 4 shows the Level of Technology Implementation (LoTi) profile score that participants received. Only one participant reached the highest possible level of 6: Refinement. A LoTi score of 4b: Integration-Routine is considered the ‘Target Technology Level’ and suggests that the use of technology is focused on higher-order thinking skills and real-world learning activities (LoTi Technology Use Profile, 2004). A total of 6 deaf educators (13.64%) scored at or above this level.

Next, the hypothesis was tested using an independent samples t-test. Levene’s test for homogeneity of variances was not significant (F = 1.219, p = .276), which supports the assumption of equal variances required for the comparison.

Obstacles                   Frequency   Frequency   Frequency   Percent
                            (deaf)      (hearing)   (Deaf Ed)   (Total)
None                        0           4           4            9.1
Access to Technology        1           4           5           11.4
Time to Plan & Practice     6           15          21          47.7
Other Priorities            0           7           7           15.9
Lack of Staff Development   3           4           7           15.9

Table 3. Demographic Frequencies of Greatest Obstacles to Technology Integration

Levels                               Frequency   Frequency   Frequency   Percent
                                     (deaf)      (hearing)   (Deaf Ed)   (Total)
Level 0: Non-use                     0           7           7           15.9
Level 1: Awareness                   1           3           4            9.1
Level 2: Exploration                 0           5           5           11.4
Level 3: Infusion                    5           11          16          36.4
Level 4a: Integration (Mechanical)   2           4           6           13.6
Level 4b: Integration (Routine)      0           3           3            6.8
Level 5: Expansion                   1           1           2            4.5
Level 6: Refinement                  1           0           1            2.3

Table 4. Study Participants Classified by LoTi Scores and Type of Classrooms


The t-test statistic (t = 2.064, p < .05) indicated a statistically significant difference between the means of the groups. Table 5 shows the t-test results.

The results illustrate a statistically significant difference (p < .05) between the means of the deaf educators based on hearing status; therefore, the null hypothesis was rejected. The alternative hypothesis, that there is a difference between the levels of technology integration of K-12 technology-minded deaf educators who are deaf and those who are hearing, was accepted. Cohen’s d was calculated to determine effect size (d = .775), which can be interpreted as a large effect. Effect size addresses practical significance, and for independent samples t-tests the Cohen’s d statistic is appropriate (Kotrlik & Williams, 2003).
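For readers who wish to verify the reported statistics, the sketch below recomputes the t-test and effect size from the group summary in Table 5a. This is a minimal illustration in Python (assuming scipy is available), not the authors’ analysis script; the small discrepancy in d (about .74 here versus the reported .775) likely reflects a different effect-size denominator, and Levene’s test itself requires the raw scores, so it cannot be reproduced from these summaries.

    from math import sqrt
    from scipy import stats

    # Group summary statistics as reported in Table 5a
    n1, m1, s1 = 34, 2.368, 1.5682  # hearing deaf educators
    n2, m2, s2 = 10, 3.500, 1.3540  # deaf educators who are deaf

    # Pooled standard deviation under the equal-variances assumption
    sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

    # Independent-samples t statistic, degrees of freedom, two-tailed p value
    t = (m1 - m2) / (sp * sqrt(1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)

    # Cohen's d from the pooled standard deviation
    d = abs(m1 - m2) / sp

    print(f"t({df}) = {t:.3f}, p = {p:.3f}, d = {d:.2f}")
    # -> approximately t(42) = -2.064, p = 0.045, d = 0.74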

Conclusions and Discussion

The mean scores of participants on the LoTi profile survey were statistically significantly higher for deaf educators who were deaf compared to deaf educators who were hearing. A discussion of the findings of this study should also be tied to the demographic data of the subjects. Specifically, the participants were selected from a population pool whereby teachers were presumably predisposed to using technology, based upon their attendance at a technology training session in the form of a conference. Therefore, the results are not intended to reflect the performance of a typical K-12 educator, but to be indicative of how technology-minded educators might perform.

Levels of Technology Integration
Hearing Status   N    Mean    Std. Deviation   Std. Error Mean
Hearing          34   2.368   1.5682           .2689
Deaf or HH       10   3.500   1.3540           .4282

Table 5a. Group Statistics

Levels of Technology Integration (equal variances assumed)
Levene’s Test for Equality of Variances:  F = 1.219, Sig. = .276
t-test for Equality of Means:  t = -2.064, df = 42, Sig. (2-tailed) = .045
Mean Difference = -1.1324, Std. Error Difference = .5485
95% Confidence Interval of the Difference:  Lower = -2.2394, Upper = -.0254

Table 5b. Independent Samples Test


The study did have significant limitations. There was no attempt made to match teachers who had similar teaching backgrounds on a one-to-one basis, so this study may reflect, in part, differences inherent to instructors of particular subjects or learning environments. It also could be that technology exposure prior to the conference skewed the results in favor of the deaf educators who were deaf. Additionally, the survey was distributed in written English only; it is possible that deaf educators whose first language is ASL (American Sign Language) may have rated themselves differently if the survey had been presented in their primary language. Therefore, there were uncontrolled variables that might have influenced the results. Within the sample of deaf educators (N=44), the size of the sub-group of deaf educators who were deaf (N=10) was small; however, it accurately reflects the demographics of the entire population of deaf educators in the United States, since most deaf educators are hearing. The small sub-group did make statistical significance more difficult to establish, and these findings are therefore preliminary, not conclusive, due to the nature and size of the subject pool.

The data analysis must be kept in perspective. This preliminary study suggests deaf educators who are deaf are perhaps better able to integrate technology than deaf educators who are hearing. These results would seem to concur with the research of Marlatt (2004), who reflected that students found deaf educators who were deaf to be more effective overall than their hearing counterparts. However, Marlatt’s research was not directed at technology specifically, and there is a clear lack of research in the literature that examines the issue of technology integration based on hearing status in deaf education. Although the study did not investigate any causality for differences among the sub-groups, it can be speculated that factors such as deaf persons’ daily use of technology and group member collaboration patterns might be influencing variables. Further research is needed to determine why deaf educators who are deaf appear to be more adept at integrating technology than their hearing counterparts.

When evaluated in terms of prior research involving the LoTi instrument to determine levels of technology implementation, the findings of this study appear to suggest that technology-minded teachers may be more successful integrators than typical instructors. The LoTi survey, as described earlier, was given to 47,955 K-12 teachers between July of 2003 and July of 2004. The percentage of teachers scoring at the ‘Target Technology Level’, a 4b, was 11% (LoTi Technology Use Profile, 2004). In contrast, 13.64% of the technology-minded deaf education participants in the current study scored at or above the target level. Further, the national survey showed that 59% of the participants were positioned in the LoTi 0-2 levels (LoTi Technology Use Profile, 2004); a much smaller share, 36.36%, of the deaf educators in the current study were positioned in those levels. It is also noteworthy that almost half of both deaf and hearing deaf educators reported that ‘Time to Plan and Practice’ was their top barrier to further technology integration. These findings closely match the findings of other researchers who have investigated the barriers preventing technology usage in the classroom (Earle, 2002; Jacobsen, 2001; Norris & Soloway, 2000; Roberson, 2001; Wetzel, 2002).

“This preliminary study suggests deaf educators who are deaf are perhaps better able to integrate technology than deaf educators who are hearing.”

When technology is integrated into the curriculum rather than taught as a superficial tool, it has the potential to propel students to higher-order thinking skills, resulting in deeper comprehension (U.S. Department of Education, 2005). Currently, the typical K-12 educator in the United States is not a very sophisticated implementer of technology in the classroom (Griffin, 2003; Staples, Pugach, & Himes, 2005; Wetzel, 2002). Perhaps by identifying sub-groups within the teaching profession who integrate technology in a deeper, more meaningful way, it might be possible to recognize pivotal activities or practices associated with the group that lead to advanced technology implementation.

The primary practical implication of this study is to start building a picture of the current state of technology implementation among deaf educators. If further research could show a consistent, positive relationship between technology integration and deaf educators who are deaf that exceeds general educators’ performance, then perhaps efforts could be made to foster similar group dynamics among non-deaf educators.

Accordingly, the following four recommendations have been made for further research regarding the relationship of deaf educators and levels of technology implementation:

1. Conduct a study that uses the LoTi instrument to survey typical deaf educators, both deaf and hearing, rather than those who are specifically targeted as technology-minded. Investigate how the results of that study compare to the United States national profile report on general educators.

2. Conduct a study that places deaf educators into groups based on additional variables such as type of instructional setting, i.e., bilingual/bicultural deaf schools or self-contained classrooms for the deaf. Investigate whether the instructional setting, as opposed to the hearing status of the instructor, influences technology integration practices.

3. Explore the reasons why the deaf educators who were deaf appeared to integrate technology at a higher level than deaf educators who were hearing. Identify group demographics that may contribute to the phenomenon.

4. Administer the LoTi survey questionnaire in American Sign Language (ASL) to deaf educators who are deaf and more comfortable communicating in their primary rather than their secondary language.

By investigating technology use among deaf educators through quantitative measures, it is hoped that patterns will start to emerge so that research can be directed towards discovering and sharing best practices. If the unique attributes of deaf educators who are deaf can be captured, then perhaps they can serve as models for other sub-groups. Perhaps also, time and financial resources can be allocated in a more effective way so as to optimize limited funds and training occasions.

References

Alexiou-Ray, J., Wilson, E., Wright, V., & Peirano, A. (2003). Changing instructional practice: The impact of technology integration on students, parents, and school personnel. Electronic Journal for the Integration of Technology in Education, 2(2). Retrieved September 28, 2005, from http://ejite.isu.edu/Volume2No2/AlexRay.htm


Bebell, D., Russell, M., & O’Dwyer, L. (2004). Measuring teachers’ technology uses: Why multiple-measures are more revealing. Journal of Research on Technology in Education, 37(1).

Bober, M. (2003). Using technology to spark reform in preservice education. Contemporary Issues in Technology and Teacher Education, 3(2), 205-222.

Christensen, R. (2002). Effects of technology integration on the attitudes of teachers and students. Journal of Research on Technology in Education, 34(4), p. 411-433.

Darroch, K. & Castle, N. (2005, June). Using technology successfully. Paper presented at the Instructional Technology and Education of the Deaf International Symposium, Rochester, NY.

Dodd, E., & Scheetz, N. (2003). Preparing today’s teachers of the deaf and hard of hearing to work with tomorrow’s students: A statewide needs assessment. American Annals of the Deaf, 148(1), p. 25-30.

Earle, R. (2002). The integration of instructional technology into public education: Promises and challenges. ET Magazine, 42(1), p. 5-13. Retrieved August 21, 2005, from http://bookstoread.com/e/et

Griffin, D. (2003). Educators’ technology level of use and methods for learning technology integration. A dissertation presented to the University of North Texas.

Hughes, M. & Zachariah, S. (2001). An investigation into the relationship between effective administrative leadership styles and the use of technology. International Electronic Journal for Leadership in Learning, 5(5). Retrieved August 2, 2005, from http://www.ucalgary.ca/~iejll/volume5/hughes.html

Jacobsen, D. (2001, April). Building different bridges: Technology integration, engaged student learning, and new approaches to professional development. Paper presented at the 82nd Annual Meeting of the American Educational Research Association, Seattle, WA.

Johnson, H. (2004). U.S. Deaf education teacher preparation programs: A look at the present and a vision for the future. American Annals of the Deaf, 149(2), p. 75-91.

Knezek, G., Christensen, R., & Fluke, R. (2003, April). Testing a will, skill, tool model of technology integration. Paper presented at the annual meeting of the American Educational Research Association, Chicago, Illinois.

Kluwin, T. & Noretsky, M. (2005). A mixed-methods study of teachers of the deaf learning to integrate computers into their teaching. American Annals of the Deaf, 150(4), p. 350-357.

Kotrlik, J. & Williams, H. (2003). The incorporation of effect size in information technology, learning, and performance research. Information Technology, Learning, and Performance Journal, 21(1), p. 1-7.

LoTi Technology Use Profile. (2004). United States national profile. Retrieved June 20, 2005, from http://loticonnection.com

Mackall, P. (2002). TecEds Survey. Laurent Clerc National Deaf Education Center. Retrieved July 7, 2005 from http://clerccenter.gallaudet.edu/TecEds/evaluation/2002-Survey/TecEds2002Survey.htm

Marlatt, E. (2004). Comparing practical knowledge storage of deaf and hearing teachers of students who are deaf or hard of hearing. American Annals of the Deaf, 148(5), p. 349-357.

Milone, M. (2004). EdTech leader of the year 2004. Technology and Learning, 25(5), p.7

Moersch, C. (2001). Next steps: Using LoTi as a research tool. Learning & Leading with Technology, 29(3), p. 22-27.

Moersch, C. (2002). Measures of success: Six instruments to assess teachers’ use of technology. Learning and Leading with Technology, 30(3), p. 10-13, 24-28.

National Center for Education Statistics (2000). Teachers’ tools for the 21st century: A report on teachers’ use of technology. Retrieved March 1, 2006 from http://nces.ed.gov/pubsearch

NETS (n.d.). Integrating technology in general education. NETS for teachers: Preparing teachers to use technology. Retrieved September 20, 2005, from http://cnets.iste.org

Norris, C. & Soloway, E. (2000). The snapshot survey service: A web site for assessing teachers’ and administrators’ technology activities, beliefs, and needs. Retrieved August 20, 2005, from http://www.ed.gov/rschstat/eval/tech/techconf00/cathienorris.pdf

Nover, S., Andrews, J., & Everhart, V. (2001). Critical pedagogy in Deaf education: Teachers’ reflections on implementing ASL/English bilingual methodology and language assessment for Deaf learners. USDLC Star Schools Project Report No. 4.

Owen, P. & Demb, A. (2004). Change dynamics and leadership in technology implementation. The Journal of Higher Education, 75(6), p. 636-666.

Parton, B. (2005). Distance education brings Deaf students, instructors, and interpreters closer together: A review of prevailing practices, projects, and perceptions. International Journal of Instructional Technology & Distance Learning, 2(1), p.65-75.

Parton, B. (2006). Technology adoption and integration levels: A comparison study between technology-minded general educators and technology-minded deaf educators. A dissertation presented to the University of North Texas.

Pillai, P. (1999). Using technology to educate deaf and hard of hearing children in rural Alaskan general education settings. American Annals of the Deaf, 144(5), p. 373-378.

Pope, M., Hare, D., & Howard, E. (2002). Technology integration: Closing the gap between what preservice teachers are taught to do and what they can do. Journal of Technology and Teacher Education, 10(2), p. 191-203.

Popson, S. (2005, February). Join together. Paper presented at the Association for College Educators – Deaf and hard-of-hearing (ACE-D/HH), Banff, Canada.

Roberson, L. (2001). Integration of computers and related technologies into deaf education teacher preparation programs. American Annals of the Deaf, 146(1), p. 60-66.

Rodgers, B. (2003, March). Evaluation of ePortfolio DVD-ROM for Star Schools engaged learning project. Paper presented at the Society for Information Technology and Teacher Education, New Mexico.

Staples, A., Pugach, M., & Himes, D. (2005). Rethinking the technology integration challenge: Cases from three urban elementary schools. Journal of Research on Technology in Education, 37(3), 285-311.

Steffan, R. (2004). Navigating the difficult waters of the No Child Left Behind Act of 2001: What it means for education of the deaf. American Annals of the Deaf, 149(1), p. 46-50.

Stoltzfus, J. & Moersch, C. (2005, April). Construct validation of the level of technology implementation (LoTi) survey: A preliminary analysis. Paper presented at the MICCA Conference, Maryland.

Teclehaimanot, B. (2008, January). Teachers’ perceptions of technology use in the classroom. Paper presented at the World Conference on Educational Multimedia, Hypermedia, and Telecommunications (Ed-Media), Chesapeake, Virginia.

Page 15: MJECT V4 N1C - wiu.edu

Volume 4, Number 1 MJECT 13

Texas Star Chart (2006). The Texas StaR Chart. Retrieved October 9, 2005 from http://starchart.esc12.net.

The National Agenda. (2005, April). Moving forward on achieving educational equality for deaf and hard of hearing students. Printed by the Texas School for the Deaf.

U.S. Department of Education. (2003). PT3 Grant performance report (No. P342A-P342B-000034). Retrieved July 29, 2005, from http://www.deafed.net/activities/CatalystReptYr3.htm

U.S. Department of Education. (2005). Join together: A nation wide on-line community of practice & professional development (OMB Publication No. 1890-0004). Kent, OH: Author.

Walker, T., & White, C. (2002). Technorealism: The rhetoric and reality of technology in teacher education. Journal of Technology and Teacher Education, 10(1), p. 63-74.

Wetzel, D. (2002). A model for pedagogical and curricular transformation with technology. Journal of Computing in Teacher Education, 18(2), p. 43-49.

Zazove, P., Meador, H., Derry, H., Gorenflo, D., Burdick, S., & Saunders, E. (2004). Deaf persons and computer use. American Annals of the Deaf, 148(5), p. 376-384.


Midwest Journal of Educational Communications and Technology

Call for Papers

The Midwest Journal of Educational Communications and Technology (MJECT) is the official journal of the Illinois Association for Educational Communications and Technology. The journal focuses on issues, research, and innovations that relate to the improvement of teaching and learning through the effective use of media, technology, and telecommunications. It provides a venue for professional development and a voice for those interested in promoting the use of educational communications and technology.

This peer-reviewed journal publishes articles which contribute to our understanding of issues, problems, practices, trends, and research associated with instructional technology at all levels of education, government, business and industry. It is published twice each year as fall and spring issues.

MJECT publishes original articles in three areas:

Research Articles

Qualitative and quantitative manuscripts presenting original research on practices or policies related to educational technology, instructional technology, or new media. Adherence to standard APA formatting for the presentation of empirical studies is expected, with an anticipated limit of 10-15 pages.

Promising Practices

This section consists of original presentations of policies or instructional practices that provide meaningful advancements in educational communications. Conceptual reviews of policies and practices relevant to educational communication that encourage dialogue between practitioners will also be considered. These brief articles (10 pages or less) must connect the targeted strategy or practice to a theoretical base and provide explicit discussion that promotes generalization to the field.

Open Forum

The editors invite original book reviews, publishable interviews with scholars, and similar contributions. Contact the Editor with your proposed contributions for guidance.

Call for Manuscripts

MJECT is currently seeking submissions for forthcoming issues that will focus on the three themed sections. To have a manuscript considered for publication, please send the completed manuscript directly to the MJECT editor as an email attachment. Please specify in the email message which of the three themed areas the manuscript should be considered for. The email should identify the title of the manuscript and provide full contact information (name, title, affiliation, address, phone, and email for corresponding with the author). Further submission guidelines are available by contacting the MJECT Editor.

Call for Reviewers

Manuscript reviewers are sought in the field of educational communications / instructional technology. Reviewers should be current members of IAECT. Review correspondence is accomplished by email, and a typical review is expected to be completed within 10 days of assignment. This is an excellent opportunity for educators and instructional technologists to serve the profession. More information on MJECT can be found at www.iaect.org. If you would like to help MJECT by reviewing manuscripts, please direct an inquiry to the Editor.

Send inquiries and/or articles to:
Richard Thurman
Editor, MJECT
Email: [email protected]


Exploring How Media and Learning Context Affect Transfer with Simulations

Samuel A. Helms

“. . . the takeaway for educators using simulations in the classroom is to constantly compare the simulation with its reference system. This should emphasize the similarities between the two and help students contextualize the information as relevant to both the simulation and the reference system.”

One of the main goals of using simulations for education is for the learners to transfer the information learned in the simulation to the real world. However, performance in the simulation is no guarantee of performance in the real situation (Charsky, 2004). The task remains for researchers to identify how to encourage transfer of learning from the simulation to the real situation.

In the case of using simulations, the objective is often to have learners transfer what they learn from using the simulation to the reference system. The reference system of a simulation is the real-world process or situation the simulation attempts to mimic (Asakawa & Gilbert, 2003; Helms, 2009; Jacobs & Dempsey, 1993; Peters, Visser, & Heijne, 1998). For example, the object of using a flight simulator (the simulation) is to have the pilot learn to fly a real plane (the reference system).

As examined by Charsky (2004), learners who perform well in a game or simulation do not necessarily transfer their performance and learning to the reference system. Helms (2009) envisions a three-step model for using simulations in the classroom: briefing, engagement, and debriefing. The briefing consists of the pre-instructional activities used before the learner uses the simulation. Engagement is when the learner uses the simulation for the purposes of learning. The debriefing is a review of the experience and actions taken by the learner during engagement.

This study examined two possible variables that may influence transfer: 1) the media of information delivery, and 2) how the information was contextualized in the simulation and reference system. To examine these issues, six aquariums were used as simulations of aquatic ecosystems in a high school science classroom.

Transfer

In the literature, there are several recommendations for encouraging transfer of learning. Among these recommendations are teaching in multiple contexts (Alexander & Murphy, 1999; Anderson, Reder, & Simon, 1996), increasing the similarities between the original learning situation and the transfer situation (Alexander & Murphy, 1999), showing students the potential for transfer (Billing, 2007), and having learning take place in a social context (Hatano & Greeno, 1999; Perkins & Salomon, 1989).

It is often noted that learning is situated in the context in which it was originally learned (Anderson et al., 1996; Billing, 2007; Sternberg & Frensch, 1993). Learners tend to apply knowledge and skills to situations that are the same as this original learning context, thereby not transferring the knowledge and skills to situations that differ from this context. It is argued that “…knowledge is more context-bound when it is just taught in a single context” (Anderson et al., 1996, p. 6). Therefore, how the material is learned will affect the potential for transfer later on (Billing, 2007).

To increase the potential for transfer, the original learning context should be broadened (Alexander & Murphy, 1999; Anderson et al., 1996; Billing, 2007). Providing a broader context of learning includes providing multiple examples (Anderson et al., 1996), making comparisons between different contexts (Alexander & Murphy, 1999), and showing the learning material in different contexts. The reverse has also been demonstrated in the research: “knowledge that is overly contextualized can reduce transfer…” (Bransford, Brown, & Cocking, 1999, p. 53).

Learners should also be shown that the information can transfer from one situation to another. “The learner must understand the conditions of application – when what has been learned can be used” (Billing, 2007, p. 793, emphasis in original). In this way, students will know that the learning material applies in multiple contexts, thus broadening the context in which it is learned. It also encourages the learners to see the potential for transferring the learning material into other contexts (Alexander & Murphy, 1999).

Lastly, transfer appears to be increased when learning takes place in a social context (Hatano & Greeno, 1999; Perkins & Salomon, 1989). The social context of learning provides an environment where “justifications, principles, and explanations are socially fostered, generated, and contrasted” (Perkins & Salomon, 1989, p. 22). This environment, in turn, seems to help learners apply the learning material to multiple situations.

In light of this research, one would expect these transfer predictors to hold true when using simulations. However, how the original context of learning influences how that learning is transferred when using a simulation has not been specifically examined. The purpose of this study was to qualitatively examine this issue.

Method

This study took place at a charter high school in a suburb of Denver, Colorado. The school was located in a middle-class district with a high school population of 200 students. From a potential pool of 15 students, six participants agreed to be in the study. All six participants were white males between the ages of 18 and 19 and enrolled in a summer school course. It was explained to the participants that they did not have to participate in the study to complete the course and that their grade in the course would not be influenced by participating.


The participants all expressed interest in aquatic ecosystems and aquariums before the study began. The instructor for the course had aquariums set up in the classroom for a month before participant recruiting began. The instructor told the participants that they would set up their own aquarium and be able to take it home with them at the end of the study; observations indicated this was a motivator for the participants.

Data were collected for three months, or one summer semester at the high school. Data were collected using a digital voice recorder, photographs, and fieldnotes. Audio was recorded for the entire class period, five days per week, to gather as much data as possible. Photographs were taken of the participants’ aquariums. Fieldnotes were written by the researcher during the class periods and triangulated with the instructor’s observations after each day for accuracy and trustworthiness.

The study proceeded according to the three stage model of briefing, engagement, and debriefing presented by Helms (2009). The briefing consisted of all the pre-instructional activities presented to the participants before they began using the simulations for the purposes of learning. Some briefing activities included lectures, a tour of the Denver aquarium, a tour of a local fish store, internet research, and discussions. Each participant was told they had to choose an ecosystem and set up their aquarium to simulate that ecosystem. The animals, plants, and decorations they chose had to be consistent with their chosen ecosystems.

Engagement with the simulation began when students built their aquariums and added animals of their choice. They were allowed to add any fish, invertebrates, and corals that were approved by the instructor. The participants worked on their own with guidance from the researcher and the instructor as needed. During this time, the participants had access to a computer lab, books on aquariums, and a podcast about marine aquariums (http://www.talkingreef.com).

The debriefing took place on the last day of the study, followed the models of Steinwachs (1992), Lederman (1992), and Petranek (1994), and proceeded in three phases. In the first phase, the participants recalled what they did during the simulation and why they took the actions that they did. The second phase asked the participants to discuss hypothetical scenarios (e.g., “What would happen if you added too much food to the aquarium?”). In the third phase, the participants applied what they learned in the simulation to aquatic ecosystems. For example, they would discuss what would happen if too many nutrients were added to a river system.

Analysis

In the present study, the reference system (RS) was defined as aquatic ecosystems commonly harvested for aquarium fish. The simulations were four freshwater and two marine aquariums set up by the participants. These aquariums mimicked the processes of aquatic ecosystems.

First, instances of knowledge, facts, and processes given by the instructor or the researcher were identified in the transcripts and field notes. Next, these instances were coded based on how the information was given to the students. The codes identified if the information was given (a) in the context of the simulation only, (b) in the context of the RS only, (c) by comparing the simulation to the RS, or by (d) comparing the RS to the simulation. Notes were then appended to each coded item identifying how the information was given: lecture, video, hands-on activity or lab, discussion, demonstration, or research by the participant.

Finally, the data were coded to identify instances where the participants demonstrated understanding of the facts and skills they were supposed to learn. Similar to the first coding method, this second category of codes indicated whether the students demonstrated understanding in (a) the context of the simulation, (b) the context of the RS, or (c) demonstrated understanding of a concept not explicitly given to them by the instructor or the researcher.

These two groups of codes were then compared for evidence of transfer. The researcher used Jacobs and Dempsey’s (1993) definition of transfer: “the continued appropriate responding of a learner in situations that differ from the original learning context” (p. 212). In this case, the researcher looked for evidence that the participants learned the information in one context (the simulation or the RS) and applied it in a different situation. The transcripts were triangulated with the observations and the photographs to code instances of transfer. For example, if the instructor gave a lecture about the biological conversion of ammonia to nitrogen in the context of the aquarium, and a participant said during the debriefing how this process would affect river ecosystems, it was coded as transfer. This produced three codes:

S→RS: Information was given in the context of the simulation (S) and understanding was demonstrated in the context of the reference system (RS).

RS→S: Information was given in the context of the reference system and participants demonstrated understanding of it in the context of the simulation.

None: The information was demonstrated in the same context in which it was given, indicating no transfer occurred.

The two codes of transfer, RS→S and S→RS, were used as the dependent variable in the analysis. Since transfer from one context to another was the object of the experiment, evidence of this transfer is the data being measured.
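To make the tallying step concrete, the following sketch shows one plausible way such coded instances could be rolled up into the counts analyzed below. It is a hypothetical illustration in Python; the instance list and the classification rule are illustrative simplifications, not the study’s actual hand-coding procedure.

    from collections import Counter

    # Hypothetical coded instances: (context in which information was given,
    # context in which understanding was later demonstrated)
    coded = [("S", "S"), ("RS", "S"), ("S", "RS"), ("S:RS", "S")]

    def transfer(given, shown):
        # Transfer = responding appropriately in a context that differs from
        # the original learning context (Jacobs & Dempsey, 1993)
        if shown == "RS" and given != "RS":
            return "S->RS"
        if shown == "S" and given != "S":
            return "RS->S"
        return "None"

    print(Counter(transfer(g, s) for g, s in coded))
    # -> RS->S: 2, S->RS: 1, None: 1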

Results

Table 1 displays how the information was given versus the type of transfer of that knowledge; the table was created to examine the potential effect of information delivery method on the type of transfer demonstrated by the participants. It compares the direction of transfer (columns) with the teaching method used to deliver the content (rows), recording the number of instances each combination occurred in the data. For example, there were five instances where information was given to the participants in a lecture format and the participants transferred this same information from the reference system to the simulation.

                    Demonstrated
Learning Method     RS→S    S→RS    None
Lecture             5       6       5
Video               2       2       2
Hands-on            4       6       3
Discussion          5       2       3
Demonstration       4       6       1
Research            3       5       5

Table 1. Teaching Mode versus Demonstration of Learning

Learning Method and Transfer

Based on the coded data, the researcher first looked to see if different instructional strategies in the briefing had any noticeable effect on transfer. Several methods of instruction were used including expository lectures, demonstrations, hands-on activities and labs, a video, group discussions, and collaborative and independent research by the participants.


For example, coral bleaching was only discussed in an expository fashion; the students were told what coral bleaching is and what causes it in the context of coral reef ecosystems. Yet, Starbuck and Stubb demonstrated transfer of this knowledge back to their aquariums the next day when choosing what corals to add. Thus they transferred knowledge given to them in one context (aquatic ecosystems, the RS) to another context (their simulations), yielding RS→S.

The data in Table 1 were analyzed using a chi-square test, which found no significant differences (χ² = 5.34, p = .867). The data suggest that the method of instruction had no statistically significant effect on transfer.
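As an illustration, the reported test can be reproduced from the counts in Table 1. This is a minimal sketch assuming Python with numpy and scipy, not the author’s analysis script:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Counts from Table 1: rows = learning method,
    # columns = RS->S, S->RS, None
    table1 = np.array([
        [5, 6, 5],   # Lecture
        [2, 2, 2],   # Video
        [4, 6, 3],   # Hands-on
        [5, 2, 3],   # Discussion
        [4, 6, 1],   # Demonstration
        [3, 5, 5],   # Research
    ])

    chi2, p, dof, _ = chi2_contingency(table1)
    print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
    # -> chi2(10) = 5.34, p = 0.867 (no significant association)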

Learning Context and Transfer

Next, the codes for how the information was contextualized for the participants were compared with evidence of transfer. The learning context was determined by how the information was linked between the simulation and the RS. This was then compared to how the participants transferred, or did not transfer, this information. Table 2 shows the results of this comparison, with the cells indicating the number of coded instances. The rows in the table are as follows:

“S” indicates the information was connected only to the simulation and not to the reference system (aquatic ecosystems).

“RS” indicates the information was connected only to the reference system and not to the simulation.

“S:RS” indicates information connected to the reference system through the simulation. Such a statement might be, “If you put a predatory fish in your aquarium and it eats another fish, this is like the food web in the marine ecosystem.”

“RS:S” indicates information presented in the reference system first and then related to the simulation. For example, “So, the waves crashing around on the shore mix oxygen into the water; in your aquarium this is done with an aerator.”

“Not given” indicates information not explicitly given by the researcher or the instructor at any point during the study. It is likely the participant researched it on his own or knew the information before the study began.

                    Demonstrated
Learning Context    RS→S    S→RS    None
S                   0       1       16
RS                  3       0       0
S:RS                13      5       0
RS:S                12      15      0
Not given           0       0       4

Table 2. Learning Context of Information versus Demonstration of Learning

This table was analyzed using a chi-square test, which found significant differences (χ² = 72.6, p < .001). The data suggest that how the information was contextualized initially had a statistically significant effect on whether or not the information was transferred. Looking at the table, it appears that information given to the students in the context of the simulation only rarely resulted in any transfer later on. However, when information was given in the context of the simulation and connected to the RS, participants were likely to transfer this information from the RS back to the simulation. The same appears to be true for information given in the context of the RS and connected to the simulation: participants were more likely to transfer this information later on.
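The same call reproduces this result from Table 2, and the expected counts make the pattern in the text visible. Again a hedged sketch assuming numpy and scipy, not the author’s script:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Counts from Table 2: rows = S, RS, S:RS, RS:S, Not given;
    # columns = RS->S, S->RS, None
    table2 = np.array([
        [ 0,  1, 16],
        [ 3,  0,  0],
        [13,  5,  0],
        [12, 15,  0],
        [ 0,  0,  4],
    ])

    chi2, p, dof, expected = chi2_contingency(table2)
    print(f"chi2({dof}) = {chi2:.1f}, p = {p:.1e}")
    # -> roughly chi2(8) = 72.6, p < .001; the expected counts show the
    # simulation-only row's "None" cell (16 observed vs. about 4.9 expected)
    # contributes much of the statistic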

Conclusions

It appears that the delivery method of the information has no effect on whether or not this information will be transferred to a different context. In this case, it can be argued that the learning is situated in the context in which it was originally learned, regardless of the method of learning. Even when participants researched the concepts on their own, there was no evidence that this encouraged transfer to new situations. However, how the learning was contextualized was a significant predictor of how well students transferred the information. If a direct link was established between the simulation and the RS, the students had a greater likelihood of transferring the information.

Based on the data in Table 2, the best predictor of whether participants would demonstrate transfer was whether the knowledge was linked between the simulation and the RS. Participants did not demonstrate that they recalled or understood knowledge that was connected only to the simulation. This included discussions about carbonate hardness, specific gravity, live rock and live sand, and the purpose of dechlorinators, among other topics. Although the instructor and/or the researcher mentioned the importance of all these factors to the students at one or more points in the study, these topics were discussed only as they related to aquariums, not as they related to aquatic ecosystems. While using the simulation, and during the debriefing, students did not demonstrate any understanding of these topics.

Additionally, the instructional methods used were different for each category of knowledge the students failed to recall or demonstrate. This is consistent with the findings described above where the reverse was true: participants’ successful recollection of information seemed unrelated to the method of instruction. Carbonate hardness was presented in an expository lecture and students practiced testing and interpreting data in the lab. Specific gravity was presented to the participants using the same methods. Live rock and live sand were presented only in lecture formats but at multiple times. Finally, dechlorinators were also presented in a lecture and practice format, albeit only once. Regardless of the method, participants did not seem to transfer any of this understanding to a different situation.

Looking across the rows “S→RS” and “RS→S” in Table 2, one can see that knowledge presented in either of these formats resulted in the greatest transfer. Also, knowledge given from the RS to the simulation (RS→S) was more often demonstrated by the participants than when the knowledge was presented in the reverse order (S→RS). The most common example of this was in regard to how learners set up their tanks and what fish they could add. Consider the following transcription of a discussion the instructor had with the students about the food web:

Instructor: “But the idea is that you saw that in salt water ecology they got these critters that produce energy, their own food, from the sun. That’s always algae, like in a marine environment. Then you got these critters, and there’s a whole different variety of them, that eat those: so those would be your primary consumers. The secondary consumers would be the predators, and of course what eats the predators…is when they die it would be some kind of scavenger or decomposer.”

Researcher: “And you know what occurred to me too. The predator in an aquarium anyway does not necessarily mean it is going to eat the rest of the fish. Because that shark I have in mine is a predator, technically, but it is way too small to eat the other fish. So it’s a predator, but like the flake food is made of fish, so they eat that. Because when you think predator, I always think like piranha, but I guess it could be something more simple, like a clown fish even I guess. It’s not as cool as a piranha…”


The lesson on the food web (producers, primary consumers, secondary consumers, decomposers) was first presented by the instructor in relation to aquatic ecosystems only (i.e., the RS only). The researcher then applied what the instructor said to the aquariums and established a direct link between the two (RS→S).

However, the data in this study suggest that when participants gather knowledge in order to become better users of the simulation, they are also more likely to transfer this knowledge and experience back to the RS. This conclusion is supported by the finding that delivering knowledge in the RS→S format resulted in far greater demonstrations of knowledge in the later phases than delivering knowledge in connection to the RS or the simulation alone. This implies that how much participants transfer from the simulation to the RS depends on whether the RS is connected to the simulation when the knowledge is first presented.

Discussion

This study examined transfer of information using a simulation of aquatic ecosystems. The results of the study indicated that the method of information delivery had no effect on the likelihood of transfer by the participants. The biggest predictor of transfer was whether or not the information was directly connected between the simulation and the RS. Based on the literature regarding transfer, the results of this study are not surprising. The results do, however, suggest a recommendation for teaching when using simulations.

Comparing the RS and the simulation likely increased transfer for several reasons. First, providing a direct comparison between the simulation and the RS when the information was first learned contextualized the information in such a way as to allow for a greater likelihood of transfer. In other words, the information was learned in the context of two situations, rather than just the situation of the simulation. Second, comparing the simulation and the RS may have emphasized the similarities between the two. By seeing more of the similarities, students were more likely to transfer relevant information. Third, the relationship between the simulation and the RS was such that transfer was encouraged.

The context in which the original information is learned matters for transfer. The more situated the learning, the less likely the student is to transfer the information to a new context (Anderson, Reder, & Simon, 1996; Billing, 2007; Bransford, Brown, & Cocking, 1999; Hatano & Greeno, 1999; Sternberg & Frensch, 1993). By comparing the simulation to the RS, the information was contextualized in both situations. This is supported by the finding that when the information was given in the context of the simulation only, it was rarely transferred to the RS. In this situation, the information was too contextualized in one specific situation to transfer. Comparing the simulation and the RS may have broadened the context and allowed for easier transfer.

Additionally, comparing the simulation and the RS may have emphasized the similarities between the two situations, encouraging students to see the potential for transfer. It is known that “transfer depends as well on ability to perceive the affordances for the activity that are in the changed situation” (Greeno, Moore, & Smith, 1993, p. 102). Seeing the potential for transfer between two situations increases the likelihood of transfer (Anderson, Reder, & Simon, 1996; Catrambone & Holyoak, 1989). Thus, comparing the simulation and the RS should have increased the incidence of transfer by showing students the similarities between the two situations, which was the result in this study.

The type of transfer that occurred in this study can be categorized as near, vertical, and low-road, because the simulations were similar to the reference system, were considered to be a part of the reference system, and the participants did not generalize their knowledge and apply it to diverse situations (Perkins & Salomon, 1987; Simons, 1999). When the simulation and RS were used in conjunction to present information, the participants did not have to extrapolate the information to apply it in a different situation. In essence, it was easy for the participants to transfer. This may be another effect of comparing the simulation and the RS so deliberately. It may make the type of transfer that needs to occur easier for the learners.

Future research could examine how the information presented in a briefing (Helms, 2009) influences transfer during engagement and the debriefing. This would help educators construct the briefing so that the learners achieve the greatest amount of transfer. Additionally, the finding that the medium of delivery had no noticeable effect on transfer should be examined further, given the limited generalizability of this study. This result may not hold true in a controlled experimental setting.

Based on the results of this study, the takeaway for educators using simulations in the classroom is to constantly compare the simulation with its reference system. This should emphasize the similarities between the two and help students contextualize the information as relevant to both the simulation and the RS. Educators should avoid presenting knowledge in the context of the simulation only, as this information will not likely transfer to performance in the RS.

References

Alexander, P.A., & Murphy, P.K. (1999). Nurturing the seeds of transfer: A domain-specific perspective. International Journal of Educational Research, 31(7), 561-576.

Anderson, J.R., Reder, L.M., & Simon, H.A. (1996). Situated learning and education. Educational Researcher, 25(4), 5-11.

Asakawa, T., & Gilbert, N. (2003). Synthesizing experiences: Lessons to be learned from internet-mediated simulation games. Simulation and Gaming, 34(1), 10-22.

Billing, D. (2007). Teaching for transfer of core/key skills in higher education: Cognitive skills. Higher Education, 53, 483-516.

Bransford, J.D., Brown, A.L., & Cocking, R.R. (1999). How people learn: Brain, mind, experience and school. Washington, D.C.: National Academy Press.

Catrambone, R., & Holyoak, K.J. (1989). Overcoming contextual limitations on problem-solving transfer. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(6), 1147-1156.

Charsky, D.G. (2004). Evaluation of the effectiveness of integrating concept maps and computer games to teach historical understanding (Doctoral dissertation). Retrieved April 20, 2009, from http://0-proquest.umi.com.source.unco.edu

Greeno, J.G., Moore, J.L., & Smith, D.R. (1993). Transfer of situated learning. In D.K. Detterman & R.J. Sternberg (Eds.), Transfer on trial: Intelligence, cognition, and instruction (pp. 99-167). Norwood, NJ: Ablex.

Hatano, G., & Greeno, J.G. (1999). Commentary: Alternative perspectives on transfer and transfer studies. International Journal of Educational Research, 31(7), 645-654.


Helms, S. (2009). Punk rock fish: Applying a conceptual framework of simulations in a high school science classroom. TechTrends, 53(3), 49-53.

Jacobs, J.W., & Dempsey, J.V. (1993). Simulation and gaming: Fidelity, feedback and motivation. In J.V. Dempsey & G.C. Sales (Eds.), Interactive instruction and feedback. Englewood Cliffs, NJ: Educational Technology.

Lederman, L.C. (1992). Debriefing: Toward a systematic assessment of theory and practice. Simulation & Gaming, 23(2), 145-160.

Perkins, D.N., & Salomon, G. (1987). Transfer and teaching thinking. In D.N. Perkins, J. Lochhead, & J. Bishop (Eds.), Thinking: The Second International Conference (pp. 285-231). Hillsdale, NJ: Erlbaum.

Perkins, D.N., & Salomon, G. (1989). Are cognitive skills context-bound? Educational Researcher, 18(1), 16-25.

Peters, V., Visser, G., & Heijne, G. (1998). The validity of games. Simulation & Gaming: An International Journal, 29, 20-38.

Petranek, C.F. (1994). A maturation in experiential learning: Principles of simulation and gaming. Simulation and Gaming, 25(4), 513-523.

Simons, P.R.J. (1999). Transfer of learning: Paradoxes for learners. International Journal of Educational Research, 31(7), 577-589.

Steinwachs, B. (1992). How to facilitate a debriefing. Simulation and Gaming, 23(2), 186-195.

Sternberg, R.J., & Frensch, P.A. (1993). Mechanisms of transfer. In D.K. Detterman & R.J. Sternberg (Eds.), Transfer on trial: Intelligence, cognition, and instruction (pp. 22-38). Norwood, NJ: Ablex.


Providing Metacognitive Prompts in Online Courses

Bruce R. Harris
Reinhard W. Lindner


As a result of recent advances in technology, online learning has become a major form of learning and teaching around the world. More and more instructors are converting their face-to-face classes to an online course format (Allen & Seaman, 2007).

In our experience teaching online courses over the last 15 years, completing an online course generally requires learners to take more responsibility for their learning than a face-to-face course does. Successful online learners must be more self-directed and self-regulated than those in traditional courses. The nature of online courses involves independent learning that requires students to be highly self-regulated (Harris, Lindner, & Pina, in press).

It is generally held that the construct of self-regulated learning consists of three parts: metacognition, cognitive strategy use, and motivation (Zimmerman & Martinez-Pons, 1986). Metacognitive processes and strategies include setting goals and planning, self-monitoring, self-evaluating, etc. Cognitive strategy use includes managing the learning environment and using rehearsal, organizational, and elaboration learning strategies, etc. Motivational processes and strategies include high self-efficacy, self-attributions, self-motivation, volition, etc. (Lindner & Harris, 1998).

Self-regulated learners systematically direct their thoughts, feelings, and actions toward the attainment of their goals. They are cognizant of, and more accurate in assessing, their academic strengths and weaknesses; they have a repertoire of strategies they appropriately apply to tackle the day-to-day challenges of academic tasks; they engage in self-reflection; they are self-motivated; and they achieve their academic goals.



There exists a large body of evidence suggesting that metacognitive processing is a key component in self-regulated learning (Kauffman, 2004). One method of encouraging learners to use metacognitive strategies is to embed metacognitive prompts in the instruction. Few studies, however, have actually investigated the effects of providing metacognitive prompts in instructional interventions, especially in online learning environments. Specifically, little research has been conducted investigating the degree to which learners are likely to use metacognitive prompts when given a choice in online learning environments.

With the aim of advancing research in this area of growing importance, the purpose of this paper is to report the results of an exploratory study that investigated the effects of providing metacognitive prompts in the instructional modules of an online course. The specific research questions were:

1. To what degree would students respond to metacognitive prompts (by actually typing responses) when given a choice to respond or not while completing an online instructional module?

2. To what degree will students who respond (type in a response) to metacognitive prompts value the prompts and feel that by responding it helped them be more successful learners?

3. Will students who complete the metacognitive prompts achieve a higher grade in the course than students who do not respond to the prompts?

Methods

Forty graduate students enrolled in an introductory online Instructional Technology course at a Midwestern regional university participated in the study. There were 24 females and 16 males. As the learners completed two online instructional modules (each module took the learners about 45-90 minutes to complete), they were prompted to use metacognitive strategies about 15 times in the modules.

The students were not required to type in a response to the metacognitive prompts—responding was optional. For those who chose to respond, their responses to the prompts were recorded in a back-end database (using FileMaker Pro) for analysis purposes, specifically to determine the number of times each student responded to the prompts.
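The article does not describe the database schema. Purely as an illustration of this kind of back-end logging (with Python's sqlite3 standing in for FileMaker Pro, and hypothetical table and column names), a minimal sketch might look like the following.

# Hypothetical sketch: log optional prompt responses so that the number
# of responses per student can be tallied later (Research Question #1).
# The study itself used FileMaker Pro; sqlite3 is only a stand-in here.
import sqlite3

conn = sqlite3.connect("prompt_responses.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS responses ("
    "student_id TEXT, module INTEGER, prompt_id INTEGER, response TEXT)"
)

def log_response(student_id, module, prompt_id, response):
    """Record a typed response; skipped prompts are simply never logged."""
    conn.execute(
        "INSERT INTO responses VALUES (?, ?, ?, ?)",
        (student_id, module, prompt_id, response),
    )
    conn.commit()

def responses_per_student(module):
    """Count how many prompts each student answered in a given module."""
    cur = conn.execute(
        "SELECT student_id, COUNT(*) FROM responses "
        "WHERE module = ? GROUP BY student_id",
        (module,),
    )
    return dict(cur.fetchall())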

Following are some examples of the metacognitive prompts that were included in the online modules:

a) “What do I already know about this topic in this module?”

b) “To what degree did I understand the section I just reviewed? What were the central messages in this section?”

c) “Now is a good time to ask yourself what kind of a grade you think you would receive if you had to take a quiz on the material you just reviewed.”

Research Question #1 (To what degree would students respond to metacognitive prompts [by actually typing responses] when given a choice to respond or not while completing an online instructional module?) was answered by analyzing the data collected in the database file.

Research Question #2 (To what degree will students who respond [type in a response] to metacognitive prompts value the prompts and feel that by responding it helped them be more successful learners?) was answered via a questionnaire administered to all students after the course, asking their perceptions of the metacognitive prompts.


To answer the third research question (Will students who complete the metacognitive prompts achieve a higher grade in the course than students who do not respond to the prompts?), the statistical program SPSS™ was used to correlate the degree to which students responded to the metacognitive prompts with their grade for the course and to determine whether there was a significant correlation.

Results

With regards to the first research question, the results showed that most of the students chose to complete the metacognitive prompts when given a choice. For the first instructional module that the learners completed, 34 of the 40 students (85 percent) elected to respond to at least some of the prompts. Of the 34 students who responded to the metacognitive prompts, some responded more frequently than others. Thirty-five percent (12 students) of the 34 students responded to all of the prompts (see Table 1 for details).

Twenty-two of the 40 learners (55 percent) responded to at least some of the metacognitive prompts in the second instructional module. Of the 22 learners who responded to the prompts, six (27 percent) responded to all the prompts. Table 1 summarizes the degree to which the learners chose to respond to the metacognitive prompts for both instructional modules.

Results for the First Instructional Module (n=34)

Answered all the questions: 12 (35%)
Answered more than half but not all the questions: 13 (38%)
Answered about half of the questions: 2 (6%)
Completed less than half of the questions: 7 (21%)

Results for the Second Instructional Module (n=22)

Answered all the questions: 6 (27%)
Answered more than half but not all the questions: 10 (46%)
Answered about half of the questions: 4 (18%)
Completed less than half of the questions: 2 (9%)

Table 1. Degree the Learners Responded to the Metacognitive Prompts

The results regarding the second research question were mixed. Fifteen students (out of the 34 who responded to at least some of the metacognitive prompts) responded to a questionnaire asking their perceptions of the metacognitive prompts after they completed the course. The majority of these students (53 percent) felt the metacognitive prompts were worth the time they invested in them; 33 percent were undecided and 13 percent felt that it was not worth their time to complete the prompts. Sixty-seven percent of the students felt they did better in the course as a result of completing the metacognitive prompts; 13 percent were undecided and 13 percent did not feel the prompts helped them to do better in the course. Eighty percent of the students indicated that they will continue to use some of the strategies suggested by the metacognitive prompts in other courses they take in the future. Table 2 summarizes the learners’ perceptions of the metacognitive prompts.

To answer the third research question, the students’ grades for the course were correlated (using a Pearson correlation) with how many times they responded to the self-monitoring prompts. The results showed no significant correlation between how frequently students responded to the prompts and the grade they received in the course. The correlation may have been attenuated by a lack of variability in grades across participants: all the students received a grade of A in the course except two students, who received a B.
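For illustration, this kind of analysis can be reproduced in a few lines. The sketch below uses made-up placeholder values rather than the study's data (the study used SPSS), with grades coded numerically (A = 4.0, B = 3.0).

# Illustrative Pearson correlation between the number of prompts each
# student answered and a numeric coding of the course grade.
# The values below are hypothetical placeholders, not the study's data.
from scipy.stats import pearsonr

prompts_answered = [15, 12, 9, 15, 4, 0, 14, 7]
grades = [4.0, 4.0, 4.0, 4.0, 4.0, 3.0, 4.0, 4.0]

r, p = pearsonr(prompts_answered, grades)
print(f"r = {r:.2f}, p = {p:.3f}")
# With nearly every student earning an A, the grade variable has almost
# no variance, which attenuates r -- the restriction-of-range problem
# noted above.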

Students’ answers to the following question: To what degree did you think the metacognitive prompts were worth the time you invested in completing them to help you be a more successful learner? (n=15)

I strongly agree that it was worth the time invested: 2 (13.3%)
I agree that it was worth the time invested: 6 (40%)
I am undecided: 5 (33.3%)
I disagree that it was worth the time invested: 2 (13.3%)
I strongly disagree that it was worth the time invested: 0 (0%)

Students’ answers to the following question: Do you feel that you did better in the course as a result of completing the metacognitive prompts? (n=14)

I strongly agree that the prompts helped me to do better: 3 (20%)
I agree that the prompts helped me to do better: 7 (46.7%)
I am undecided: 2 (13.3%)
I disagree that the prompts helped me to do better: 2 (13.3%)
I strongly disagree that the prompts helped me to do better: 0 (0%)
No response: 1 (6.7%)

Students’ answers to the following question: Do you think you will continue to use some of the strategies suggested by the metacognitive prompts for other courses you will take? (n=15)

Yes: 12 (80%)
No: 3 (20%)

Table 2. Students’ Perceptions of the Metacognitive Prompts

Discussion

Even though this was an exploratory study that did not include a large sample size, there were some interesting results. The results showed that a majority of the graduate students enrolled in this particular introductory Instructional Technology online course chose to respond to the metacognitive prompts by typing in their responses. When given a choice whether to type a response to each metacognitive prompt or advance to the next frame without responding, most students chose to take the time to type in a response.

Of those students who typed in responses to the metacognitive prompts, the majority (53 percent) felt that this activity was worth the time they invested. Sixty-seven percent of the students felt they did better in the course as a result of completing the metacognitive prompts. A very high percentage (80 percent) of the students also indicated that they will continue to use some of the strategies suggested by the metacognitive prompts in other online courses they may take in the future.

Further research needs to be conducted on the issues addressed in this study. For example, it is interesting to note that the percentage of students who typed in responses to the metacognitive prompts dropped from 85 percent in the first module to 55 percent in the second. Further research could track how many students choose not to respond after completing three or more instructional modules that include metacognitive prompts: would the response rate decline after each module? It would also be worth asking students why they chose to respond less frequently during the second module.

In addition, further research could be conducted using a more precise achievement measure than the final grade in the course. For example, an achievement test could be developed to assess the information presented in each online module. The achievement results for each student could then be correlated with the degree to which the student responded to the metacognitive prompts to see if there is a significant correlation. It would also be interesting to conduct this study with undergraduate students to see whether graduate and undergraduate students respond differently to the metacognitive prompts.

References

Allen, I.E., & Seaman, J. (2007). Online nation: Five years of growth in online learning. Needham, MA: Sloan-C. Retrieved December 12, 2009, from http://www.sloan-c.org/publications/survey/pdf/online_nation.pdf

Gagne, R.M. (1985). The conditions of learning (4th ed.). New York: Holt, Rinehart & Winston.

Harris, B.R., Lindner, R.W., & Pina, A. (in press). Strategies to promote self-regulated learning in online environments. In G. Dettori & D. Persico (Eds.), Fostering self-regulated learning through ICTs. Hershey, PA: IGI Global.

Kauffman, D.F. (2004). Self-regulated learning in web-based environments: Instructional tools designed to facilitate cognitive strategy use, metacognitive processing, and motivational beliefs. Journal of Educational Computing Research, 30, 139-161.

Lindner, R.W., & Harris, B. (1998). Self-regulated learning in education majors. The Journal of General Education, 47(1), 63-78.

Zimmerman, B.J., & Martinez-Pons, M. (1986). Development of a structured interview for assessing student use of self-regulated learning. American Educational Research Journal, 23(4), 614-628.


Utilizing Digital and Online Resources as Cognitive Tools to Enhance Digital Literacy

Paul A. Schlag


Many employers assume that today’s students are well-versed in using current digital and online tools. Those born before the Digital Revolution often assume that the generation that grew up in the digital era has automatically gained skills in interacting with and developing digital content. Recently, however, my students’ abilities to evaluate digital information and to use technological tools to present information in an analytically sound manner have decreased. Recognizing this decline in skill and analytical abilities led me to examine the concept of digital literacy.

The Merriam-Webster Collegiate Dictionary (1993) defines literacy as the ability to read and write. It follows that digital literacy is the ability to read and write in a digital environment. In fact, many researchers have limited the definition of digital literacy to reading digital content (Carrington, 2005a; Carrington, 2005b; Kim & Anderson, 2008; Levy, 2009; Marsh, 2005; Pearman, 2008) and writing using digital tools (Davidson, 2009; Merchant, 2005a; Merchant, 2005b; Merchant, 2008). While the preceding definitions are helpful, they are too limiting, since technology has radically altered the way people communicate.

Reading and writing were the first attempts at asynchronous communication to overcome the obstacles of time and space. However, digital technologies have made it possible for reading and writing to be virtually synchronous means of communicating. Further, digital tools now enable humans to communicate in ways separate from simply reading and writing (i.e., podcasts, videos, digital photos, etc.). Therefore, we must expand our definition of digital literacy. Borsheim et al. (2008) take the definition of digital literacy beyond reading and writing to include multiliteracies, a concept concerned with helping students develop “literacy skills associated with consuming and producing [multimedia]” (p. 87). While this definition is a step in the right direction, it is still too limiting.

Aviram and Eshet-Alkalai (2006) developed a conceptual model of digital literacy consisting of six skills:

Photovisual literacy: The ability to work competently within digital environments, in various GUIs (graphical user interfaces) that employ graphical communication.

Reproduction literacy: The ability to develop meaningful, authentic written work and art work by reproducing and manipulating existing visuals, digital text, and audio works.

Branching literacy: The ability to present knowledge through a nonlinear navigation of knowledge domains (i.e., hypermedia and the Internet).

Information literacy: The ability to become a critical consumer of information by identifying and filtering false and biased information.

Socio-emotional literacy: The ability to communicate effectively via online communication tools (i.e., email, Facebook, Twitter, discussion boards, and chat).

Real-time thinking: The ability to consume, make sense of, and evaluate large volumes of information in real time.

To these definitions I would add my own understanding of digital literacy. Students should be able to: a) interact with and comprehend digital information, b) judge the veracity of digitally presented information, c) analyze and synthesize digital information, and d) produce and present reliable and valid digital information in a skillful manner. With these comprehensive definitions of digital literacy in mind, it became apparent that most of the students in the Facility Management course were not as digitally literate as they ought to be.

Instructional Problem

In the past, students in my Facility Management course were given the task of putting together a proposal for constructing and operating a recreation facility of their choosing. They worked in groups of 5-6 people and were given a rubric with everything the proposal should include. The proposals were due at the end of the semester and were typically bound reports of 150-200 pages. Every week lectures were presented about a topic from the rubric with the intent that students would incorporate the principles from the lecture into the proposal.

Some students took the topics in the rubric and went above and beyond what was being taught in class by supplementing their knowledge through outside resources. Unfortunately, however, such dedication was the exception rather than the norm. Most of the students were simply copying work from other sources and pasting it into their proposal with very little of their own work and analysis evident. It was often unclear if they knew what the material they presented meant and why it was included in the proposal.

With regards to digital literacy, I noticed that most of my students were uncomfortable in digital environments other than Microsoft Office or social networking sites. In fact, many students even seemed unfamiliar and uncomfortable with those tools. Further, nearly all of my students were unable to work with and manipulate existing text and graphics in order to make sense of them and present them in a manner relevant to the proposal. The linear nature of the bound proposal allowed students to simply lump sections together without understanding how all the concepts were linked.

This problem was particularly acute as groups assigned topics from the rubric to individuals in the group, who then worked alone and did not coordinate the information they presented with what other members of their group were doing. The disjointed nature of many of the proposals illustrated that little effective communication between team members was taking place.

In addition, because the proposal was not due until the end of the semester, I was often unaware during the semester of which students and/or groups did not sufficiently understand the concepts of facility management. Since the projects were graded at the end of the semester, by the time I was able to recognize gaps in students’ knowledge and understanding it was too late to address the problem. The final proposals also demonstrated that many students were not able to evaluate sources for reliability and validity and, moreover, had a difficult time critically analyzing the information gathered for inclusion in the proposal. Finally, the sheer volume of information presented was overwhelming to many students. Obviously the course needed to be redesigned.

Objectives

As I set out to revamp the Facility Management course, increasing students’ digital literacy and helping them develop technological skills were my primary considerations. The first objective for the revised course was for students to learn where to turn to obtain reliable information regarding the management of a recreation facility. They also needed to know how to analyze that information and make sense of it. Further, they needed to learn how to manipulate the information gathered and present it in a compelling manner. Increasing skills in working with and creating digital content would help them develop digital literacy and make them more hirable and marketable to future employers.

During course revision my objectives were to overcome several obstacles inherent in the previous course format by utilizing digital technologies. First, I wanted students to create a portfolio product which they could showcase to each other and to future employers. Previously, only one copy of a facility proposal was printed, and that copy resided in my office. As a result, students did not have access to the proposal in order to showcase their work to potential employers. Another objective was to use digital technologies so students could have easy access to examples of past and current proposals. Finally, I needed a way to determine which teams were unable to understand and analyze the information they were presenting in the proposal. With the preceding objectives in mind I then examined which learning theories would help me design the course in a manner that would most likely achieve these objectives while enhancing digital literacy and increasing technological skills.

Theoretical Framework of Course Revision

Constructivist principles (Vygotsky, 1978; Piaget, 1964, 1973 & 1985; Piaget & Inhelder, 1969) guided the revision of the Facility Management course. Following Constructivist thought, students could use digital tools to actively and subjectively construct knowledge. Some of the key tenets of Constructivism I sought to include in various digital tasks throughout the course were engaging learners in an environment of discovery and inquiry, in learning by doing, and in confronting authentic tasks, environments and purposes.



Another theory that informed the redesign of the facility management course is Brown, Collins, and Duguid’s (1989) Situated Cognition Theory. Collins (1988) defines Situated Cognition as learning knowledge and skills in contexts that reflect the way in which they will be used in real life. Therefore, Situated Cognition promotes the idea of immersing learners in an environment that approximates the context in which their new ideas, knowledge, skills and behaviors will be applied. Thus, the course was revised to encourage students to develop digital skills and to learn to use technological tools in a digitally literate manner, since this is what will be expected of them as they enter the workforce.

With Constructivism and Situated Cognition and their various educational methods as a framework, a need to use digital tools as cognitive tools became apparent. Using technology as cognitive tools is about learning “with” those tools, rather than learning “from” them.

Many educators continue to view digital technologies as tutors in the form of learning modules, which merely mimic an empirical behaviorist view of education. Again, this view maintains that an objective reality can be passed along from teacher to student. However, Jonassen’s (n.d.) conception of technology as cognitive tools (or mind tools) asserts, “Learners function as designers using the technology as tools for analyzing the world, accessing information, interpreting and organizing their personal knowledge, and representing what they know to others.” Thus, digital technologies allow students to critically analyze and present information in a meaningful manner. Students seek out pertinent information, analyze and deconstruct that information and then present the information using a variety of technological tools.

Pea (1985) posits that digital technologies can help learners “transcend the limitations of their minds, such as memory, thinking, or problem solving limitations” (p. 178). Thus, students’ cognitive processing is focused more on creating and clarifying meaning and analyzing information than on process. For example, students could use the formulas built into the Google Docs spreadsheet program to run budget calculations. Consequently, instead of using processing and memory attention to remember the formulas, students are able to concentrate that attention on determining what the resulting number from the calculation means. In short, digital resources may be used to enhance digital literacy by enabling students to focus on substance and analysis rather than process.
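As a rough illustration of this offloading idea in code form (the course used the formulas built into Google Docs; the budget line items and figures below are hypothetical):

# Hypothetical facility operating budget: the tool does the arithmetic,
# much like a built-in spreadsheet SUM formula, freeing the student to
# think about what the totals mean rather than how to compute them.
annual_budget = {
    "staff wages": 240_000,
    "utilities": 36_000,
    "maintenance": 18_000,
    "insurance": 12_000,
    "marketing": 9_000,
}

total = sum(annual_budget.values())
print(f"Total operating cost: ${total:,}")
for item, cost in annual_budget.items():
    print(f"  {item}: {cost / total:.1%} of budget")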

The following is my attempt to redesign the Facility Management course adhering to Jonassen’s and Pea’s conceptions of using technology as cognitive tools.

Course Revision

In the revised Facility Management course I wanted to keep project-based learning principles (Blumenfeld et al., 1991; Thomas et al., 1999) as the foundation of the course, since project-based learning is a practical application of Constructivism and Situated Cognition. Therefore, students are given the same task of developing a recreation facility proposal of their choosing, but that proposal is now presented online via a website they develop.

Most of the students have never developed a website, so it requires a bit of scaffolding (Wood, Bruner & Ross, 1976) at the beginning to help them develop one. Students are encouraged to use online tools such as Weebly.com and Google Sites, which walk them through a fairly straightforward process of creating a website.

For the proposal students work in groups of 5-6 members (with approximately 60 people in the class) and we spend an initial class period setting up the sites, with me or other more knowledgeable students in the class helping teams who struggle. Links to each group’s website are then posted on the course website so students can view each other’s work. Often, as groups ask questions about web development, I send them to another group that can help answer the question. Allowing students to view each other’s work increases motivation, provides examples so they can learn from each other, and promotes healthy competition among the groups. Additionally, I am able to visit each group’s site often during the semester to ascertain their level of understanding of facility management concepts and assist or instruct when needed. Finally, as students create these websites, their technological skills increase and they begin to learn “with” the tool, thus meeting Jonassen’s and Pea’s definitions of using technology as cognitive tools.

It is informative here to discuss how the course is currently constituted following redesign. The class meets two times a week, and the first session is designed to give students the tools, resources, and knowledge they need to successfully develop that week’s assigned component (each week they work on one component or piece of the entire proposal).

The next class period in a given week is set aside as lab time for them to work on the component that is due the following week. Also during this second session, individual groups show me their work on the component that is due that day. This is an opportunity for me to determine what they are missing, offer suggestions and clarify my expectations. It also allows students to hear each other explain their work on the component, enabling them to learn from each other. These sessions also allow me to discover if they comprehend the digital information they are presenting and are analyzing and synthesizing it appropriately.

Each group receives a grade every week for that week’s component, and at the end of the semester a final project grade is assigned. Ideally, if students make the corrections and incorporate the suggestions given in our weekly meetings, they will have a high-quality, professional facility proposal that showcases their digital skills and digital literacy. From my perspective, the most important aspect of these meetings is the ability to recognize the students who are simply regurgitating information without any critical reflection and analysis – those lacking digital literacy. From the first meeting I am able to encourage them to think critically about what they are presenting and to ponder why they are including it in the proposal. This encourages them to interact with the digital content in an analytical manner.

The course is also designed so that students are responsible to each other while working on the proposal. Any group member can be “fired” from a group with the consent of at least two group members. “Fired” students are then responsible for finding another group with whom they will work. If no other group will accept them, they do not receive points for the proposal. Further, students are given an opportunity to assign peer grades at the end of the semester based on individual performance in working on the proposal. These methods of accountability encourage group members to develop digital skills and present accurate information in a compelling manner.

While developing a website is the main medium of the proposal, it is not the only digital and online tool used in the course. There are 10 components that make up the entire proposal, and these components utilize a variety of digital tools. For one component students create a hypothetical community needs survey to determine what type of recreation facility is needed and wanted in a hypothetical community. The survey must be online, and students have to make up results and present their conclusions on their websites using text, graphs, and charts.

Many students use Google Docs to create an online form that automatically collects data into a Google Docs spreadsheet and then generates a report that analyzes survey responses. Students are then able to seamlessly and fairly effortlessly embed the survey and its results into their websites. Learning to develop online surveys and use digital tools to analyze the information gathered from those surveys obviously increases students’ technological skills. In addition, as they develop the survey, they seek out other community attitude surveys as examples and begin thinking about what should be included in such a survey, what it should look like, and how the results should be presented, thus enhancing digital literacy.

In many of the components of the proposal, students are asked to locate and link to Internet resources that relate to their proposal (websites, videos, articles, online stores, etc.). In each case they are asked to explain why the link is important to their facility proposal, which encourages them to analyze and synthesize the digital information. This task also helps students find facility-related websites and resources for their interest area and helps them to critically reflect on the information they find on the Internet. Thus, they are able to interact with digital information that might be of use in the future. During weekly meetings I have been able to see which online resources they linked to and to discuss the veracity of different sources of information with them.

Perhaps the most intensive aspect of the new course is students’ creation of digital schematics, renderings, and blueprints for their proposed facility. They locate and download CAD software and must figure out a way to integrate their renderings into their website. Most students incorporate their schematics, renderings, and blueprints into the site using graphics. However, some have created video fly-throughs of their facilities in a most impressive manner (see, for example, http://www.youtube.com/watch?v=HLlz0CjG3bc). Therefore, students not only begin to apply facility management concepts in design, but they also gain skills in producing and presenting digital content in a meaningful and compelling manner.

There are many other content-related components in the project which cannot be covered in this exposition (e.g., online budget reports and facility scheduling). However, one final component of note that takes advantage of digital and online tools is the marketing component. Students create 2-minute commercials promoting their facility and embed the digital video they create into their sites. They also choose to develop two of the following advertising options: creating a flyer, composing a digital poster, or recording a digital radio commercial. These tasks enable them to develop technological skills and interact with content in a manner that is relevant and meaningful. By creating these digital products, they are able to learn about and apply various marketing concepts.



Results of the Course Revision

The Facility Management course as it is currently constituted strives to help students enhance digital literacy while increasing their ability to prepare and present professional facility proposals. They now have to engage photovisual, reproduction, branching, information, and socio-emotional literacy in a meaningful manner to be successful in the course. They are able to look to me, outside resources, and each other for assistance in each area.

The new, step-by-step nature of the course (meeting with each group weekly) helps me evaluate and make corrections based on how well students are interacting with and comprehending digital information, judging its veracity, analyzing and synthesizing it, and producing reliable and valid digital products. Further, students now know where to go for information about facility management concepts in their interest area and also have technological skills that will make them more hirable and marketable to employers.

In the revised course students are able to construct meaning as they see the proposal come together and see the links (literal and figurative) between the 10 components. They are also able to construct meaning as they realize how the skills they are developing will help them in their future careers (whether or not they remain in facility management). Knowing that their team members, peers, and perhaps future employers will be able to view their work has also increased the quality of projects and motivated them to develop skills and digital literacy.

Finally, students’ creativity has been unleashed in the new course and the creative nature of the proposals has increased dramatically between the initial course structure and the current one. Perhaps the digital and online tools have simply become an “extension of the mind,” which frees students to focus more energy on creativity, rather than process.

Conclusion

Today’s students face an overwhelming bombardment of information. The need for increasing digital literacy is a pressing problem which requires educators to encourage a greater depth of use and thought. It behooves educators to structure courses in a manner which enables students to sort through, evaluate, filter, analyze, and present information with validity and reliability. Using digital and online tools as cognitive tools, coupled with the aforementioned Constructivist educational techniques, increased knowledge, skills, and digital literacy for students in my revised Facility Management course.

References

Aviram, R., & Eshet-Alkalai, Y. (2006). Towards a theory of digital literacy: Three scenarios for the next steps. European Journal of Open, Distance and E-Learning. Retrieved April 1, 2010, from http://www.eurodl.org/index.php?p=archives&year=2006&halfyear=1&article=223

Blumenfeld, P., Soloway, E., Marx, R., Krajcik, J., Guzdial, M., & Palincsar, A. (1991). Motivating project-based learning: Sustaining the doing, supporting the learning. Educational Psychologist, 26(3), 369-398.

Borsheim, C., Merritt, K., & Reed, D. (2008). Beyond technology for technology's sake: Advancing multiliteracies in the twenty-first century. The Clearing House, 82(2), 87-90.

Brown, J.S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.

Carrington, V. (2005a). The uncanny, digital texts and literacy. Language and Education, 19(6), 467-482.

Carrington, V. (2005b). New textual landscapes, information and early literacy. In J. Marsh (Ed.), Popular culture, new media and digital literacy in early childhood (pp. 13-27). London: Routledge Falmer.

Collins, A. (1988). Cognitive apprenticeship and instructional technology (Technical Report No. 6899). Cambridge, MA: BBN Labs Inc.

Davidson, C. (2009). Righting writing: What the social accomplishment of error correction tells about school literacy. Journal of Classroom Interaction, 8.

Jonassen, D. (n.d.). Technology as cognitive tools: Learners as designers [Electronic version]. ITForum Paper, 1. Retrieved March 31, 2010, from http://itech1.coe.uga.edu/itforum/paper1/paper1.html

Kim, J., & Anderson, J. (2008). Mother-child shared reading with print and digital texts. Journal of Early Childhood Literacy, 8(2), 213.

Lajoie, S., & Derry, S.J. (1993). Computers as cognitive tools. Hillsdale, NJ: Lawrence Erlbaum Associates.

Levy, R. (2009). “You have to understand words, but not read them”: Young children becoming readers in a digital age. Journal of Research in Reading, 32(1), 75-91.

Marsh, J. (2005). Ritual, performance and identity construction: Young children's engagement with popular cultural and media texts. In J. Marsh (Ed.), Popular culture, new media and digital literacy in early childhood (pp. 28-50). London; New York: Routledge Falmer.

Merchant, G. (2005a). Digikids: Cool dudes and the new writing. E-Learning and Digital Media, 2(1), 50-60.

Merchant, G. (2005b). Barbie meets Bob the Builder at the workstation. In J. Marsh (Ed.), Popular culture, new media and digital literacy in early childhood (pp. 183-200). Milton Park, Oxon: Routledge Falmer.

Merchant, G. (2008). Digital writing in the early years. In J. Coiro, M. Knobel, C. Lankshear, & D. Leu (Eds.), Handbook of research in new literacies (pp. 751-774). New York: Lawrence Erlbaum Associates.

Merriam-Webster Inc. (1993). Merriam-Webster's collegiate dictionary (10th ed.). Springfield, MA: Merriam-Webster.

Pea, R.D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20(4), 167-182.

Pearman, C.J. (2008). Independent reading of CD-ROM storybooks: Measuring comprehension with oral retellings. The Reading Teacher, 61(8), 594-602.

Piaget, J. (1964). Six psychological studies. New York: Vintage.

Piaget, J. (1973). The child and reality: Problems of genetic psychology. New York: Grossman Publishers.

Piaget, J. (1985). Equilibration of cognitive structures. Chicago: University of Chicago Press.

Piaget, J., & Inhelder, B. (1969). The psychology of the child. New York: Basic Books.

Thomas, H., Mergendoller, J., & Michaelson, A. (1999). Project-based learning: A handbook for middle and high school teachers. Novato, CA: The Buck Institute for Education.

Vygotsky, L., & Cole, M. (1978). Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press.

Wood, D., Bruner, J.S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89-100.
