APPROVED:

Richard G. Smith, Major Professor and Chair of the Department of Behavior Analysis
Karen A. Toussaint, Committee Member
Todd A. Ward, Committee Member
Thomas Evenson, Dean of the College of Public Affairs and Community Service
Mark Wardell, Dean of the Toulouse Graduate School
DEVELOPMENT AND EVALUATION OF A LARGE-SCALE PYRAMIDAL
STAFF TRAINING PROGRAM FOR BEHAVIOR MANAGEMENT
Audrey H. Shivers, B.S.
Thesis Prepared for the Degree of
MASTER OF SCIENCE
UNIVERSITY OF NORTH TEXAS
August 2014
Shivers, Audrey H. Development and Evaluation of a Large-Scale Pyramidal Staff
Training Program for Behavior Management. Master of Science (Behavior Analysis), August
2014, 86 pp., 2 tables, 6 figures, references, 33 titles.
Training and empirically evaluating caregivers’ implementation of behavior management
skills is a particularly challenging task in large residential contexts. A pyramidal training
approach provides an efficient and effective way to conduct large-scale competency-based
behavior skills training. The purpose of this project was to develop and evaluate a large-scale
pyramidal staff training program for behavior management skills. One hundred nine caregivers
and 11 behavior service professionals at a large, residential care facility participated in this
project. Interobserver agreement was utilized to develop and refine measurement systems to
detect caregiver acquisition of skills, behavior service professionals' ability to score caregiver
performance, and behavior service professionals' ability to deliver a specified portion of the
curriculum. Pre- and post-test probes were conducted utilizing standard role play scenarios and
checklists to evaluate caregiver acquisition of three specific behavior management skills. The
results supported the following conclusions. First, interobserver agreement measures were useful
to develop a reliable measurement system, to refine some curriculum elements, and to evaluate
measurement conducted by behavior service professionals. Second, behavior skills training
(BST) resulted in caregiver acquisition of all three behavior management techniques. Third, the
pyramidal training approach was effective in teaching behavior service professionals to deliver
BST and to accurately measure the performances of trainees.
Copyright 2014
by
Audrey H. Shivers
ACKNOWLEDGEMENTS
I would first like to thank my advisor Dr. Richard Smith for his mentorship, support, and
generous giving of his time. Your belief in this project, and in me, has meant more than words
can express. Over the last four years you have always made time to give advice, guidance, and
feedback and for that I am so grateful. I would also like to thank Katy Atcheson who has been an
instrumental part of this project, a wonderful mentor and most importantly a friend. I am a better
behavior analyst because of your example and mentorship. Thank you for the continuous support
throughout this project and the many hours you helped with OJT and data collection. Thank you
to my committee members, Dr. Karen Toussaint and Dr. Todd Ward, for your contributions and
feedback. I would like to thank everyone at the Behavior Analysis Resource Center for assisting
with the development and implementation of this project. I extend a special thanks to Danielle
Russell, Kay Treacher, and Elissa Hamilton who have been my rocks throughout graduate
school. I appreciate the many conversations of support, the laughter, and the best proofreading I
could ask for. Allison Russell, you have been “my person” since day one at UNT, I could not
have made it this far without your amazing friendship. Finally, I would like to thank my loving
family; you mean the world to me and have always believed in my ability to succeed. Thank you
for supporting me and providing me with the foundation I needed to become who I am today.
TABLE OF CONTENTS
Page
ACKNOWLEDGEMENTS ........................................................................................................... iii
CHAPTER 1 INTRODUCTION .................................................................................................... 1
CHAPTER 2 METHOD ................................................................................................................. 8
Caregiver Training .............................................................................................................. 8
Participants and Setting ........................................................................................... 8
General Employee Orientation: Foundational Behavior Management Training .... 8
On-the-Job Training (OJT): Competency-Based Behavior Management Training............................................................................................................................... 10
Behavior Service Professional Training ........................................................................... 17
Participants and Setting ......................................................................................... 17
Data Collection ..................................................................................................... 17
Pilot Structure and Sequence of Training Activities............................................. 18
Structure and Sequence of Training Activities ..................................................... 20
Exit Criteria ........................................................................................................... 22
CHAPTER 3 RESULTS ............................................................................................................... 23
Caregiver Training ............................................................................................................ 23
Behavior Service Professional Training ........................................................................... 26
Interobserver Agreement ...................................................................................... 26
Treatment Integrity ............................................................................................... 26
Caregiver Results .................................................................................................. 27
CHAPTER 4 DISCUSSION ......................................................................................................... 30
APPENDIX A STANDARD ROLE PLAY SCRIPTS ................................................................ 44
APPENDIX B ROLE PLAY CHECKLISTS ............................................................................... 48
APPENDIX C TREATMENT INTEGRITY CHECKLIST ........................................................ 52
APPENDIX D BEHAVIOR SERVICE PROFESSIONAL TRAINING MATERIALS PACKET ....................................................................................................................................... 54
REFERENCES ............................................................................................................................. 83
CHAPTER 1
INTRODUCTION
Effective staff training, particularly on a large-scale, presents a complex set of challenges
including, among other considerations, defining the desired performances, developing and
implementing training curricula, and measuring acquisition and demonstration of the target
skills. Thus, staff training is an important area for behavior analytic research and development.
The staff training literature emphasizes competency-based training and assessment of
performance-skills rather than verbal skills (e.g., answering questions on a quiz or test) as best-
practice for training staff (Parsons, Rollyson, & Reid, 2012). Methods of training have shifted
from primarily lecture-based training with written assessments, which only assess a trainee’s
ability to describe the target skill, to multi-component staff training packages. Staff training
packages typically include components such as feedback (Parsons & Reid, 1995), video
modeling (Catania, Almeida, Liu-Constant, & Reed, 2009; Neef, Trachtenberg, Loeb, & Sterner,
1991; Nielsen, Sigurdsson, & Austin, 2009), and role plays (Gardner, 1972). Increasingly, the
literature refers to behavior skills training (BST) which describes a four-component staff training
package incorporating instructions, modeling, rehearsal, and feedback (Homlitas, Rosales, &
Candel, 2014; Miles & Wilder, 2009; Parsons, Rollyson, & Reid, 2012; Rosales, Stone, &
Rehfeldt, 2009; Sarokoff & Sturmey, 2008). BST has been used to train a variety of skills such
as discrete trial teaching (Sarokoff & Sturmey, 2004, 2008), conducting functional analyses
(Iwata et al., 2000), conducting preference assessments (Lavie & Sturmey, 2002), and
implementation of the Picture Exchange Communication System (PECS) (Homlitas, Rosales, &
Candel, 2014; Rosales, Stone, & Rehfeldt, 2009). Although some earlier research (e.g., Iwata et
al., 2000; Lavie & Sturmey, 2002) does not specifically refer to the staff training package as
BST, the components are the same.
The first component of BST is instructions: providing the trainee with a description of the
skill to be learned and the conditions under which it is to be implemented.
Didactic instruction or a written description of the skill are two formats in which instructions are
provided. Checklists provide a succinct description of the skill’s components that may be more
beneficial than a lengthy narrative description (Parsons, Rollyson, & Reid, 2012). Checklists are
often utilized prior to modeling targeted skills (Macurik, O’Kane, Malanga, & Reid, 2008;
Rosales et al., 2009).
The second component of BST is modeling: demonstrating the target skill for the trainee
as an imitative prompt. Role plays have been effectively used to model target skills (Gardner,
1972); however, a disadvantage of using role plays is the amount of time required to design and
conduct them. In attempts to increase the efficiency of BST, video models have been evaluated as
a method for modeling targeted skills (Catania et al., 2009; Neef et al., 1991; Nielsen,
Sigurdsson, & Austin, 2009). Video modeling can be used at times and places that are
convenient for trainees and can be useful for large-scale application. Staff training packages
utilizing video models have effectively taught skills such as safe patient lifting (Nielsen,
Sigurdsson, & Austin, 2009), discrete trial teaching (Catania et al., 2009), functional analysis
procedures (Moore & Fisher, 2007) and implementation of client-specific behavior intervention
plans (Macurik et al., 2008).
The final two components of BST are rehearsal and feedback. Trainees practice the
targeted skills, in some cases with a confederate or trainer, and performance-specific feedback is
provided by the trainer. Rosales and colleagues (2009) utilized confederate learners to train
caregivers to implement PECS. Trainees were provided both corrective and approving feedback
following each practice trial and trials continued until mastery was achieved. Parsons, Rollyson,
and Reid (2012) indicated that the final step of BST is repeating the rehearsal and feedback until
the trainee meets mastery and that “this final step represents the essence of the competency part
of BST” (p. 4).
Parsons, Rollyson, and Reid (2012) and Reid, Parsons, and Green (2012) described an
evidence-based protocol for training staff that was consistent with BST procedures. The
evidence-based protocol focused on effectively training performance-skills until the trainee
reached competency (i.e., an established mastery criterion for the skill). The first four steps of
the protocol are 1) describe the skill to be trained, 2) provide a written summary of the skill, 3)
demonstrate the skill for staff, and 4) have staff practice the skill while providing feedback
(Reid, Parsons, & Green, 2012, p. 53). These steps are consistent with the BST components of
instruction, modeling, rehearsal, and feedback. The fifth step is “repeat Steps 1, 3, and 4 until
staff demonstrate competence in performing the target skills” (Reid, Parsons, & Green, 2012, p.
53). Reid and colleagues (2012) stated that the steps “should be repeated until the staff trainer
observes each staff trainee perform the target skill correctly” (p. 60). Developing a measurement
system to empirically identify correct performance and assess competency should be a primary
focus when training new skills.
When developing behavioral measurement procedures, validity and accuracy (i.e.,
"the extent to which observed values approximate to the events that actually occurred";
Johnston & Pennypacker, 2009, p. 141) should be assessed. Cooper, Heron, and Heward (2007) stated
that “data that are most useful and trustworthy for advancing scientific knowledge or for guiding
data-based practice are produced by measurement that is valid, accurate, and reliable” (p. 104).
Kazdin (1977) noted that reliability and accuracy are both evaluated by comparing an observer’s
data to another source and that the only difference is the confidence that the source of
comparison accurately reflects the behavior of interest (i.e., that the record against which the
observer’s data are compared reflect “true values” (Johnston & Pennypacker, 2009)). Thus,
utilizing the code-writer’s record as the standard and calculating interobserver agreement
between the code-writer and other observers assesses the reliability, validity, and accuracy of the
measurement system. Baer, Wolf, and Risley (1987, p. 316) state “in particular, when those
reliability assessments pair the observer with the code-writer, they accomplish the essential
validity of any observation-based behavioral analysis: They allow the empirical revision of the
code and thus of the stimulus control that it exerts over the observer’s observing and recording
behavior, until it satisfies the code-writer.” High interobserver agreement between the code-
writer and observers provides confidence that the measurement system is assessing the events
that actually occurred. Low interobserver agreement between the code-writer and observers
provides a basis for revising the code or calibrating the measurement system.
Interobserver agreement (IOA) is “the degree to which two or more independent
observers report the same observed values after measuring the same events” (Cooper, Heron, &
Heward, 2007, p. 113). IOA coefficients can be calculated in several ways, including total IOA,
interval IOA, exact IOA, block-by-block IOA, and occurrence or nonoccurrence IOA (Kahng,
Ingvarsson, Quigg, Seckinger, & Teichman, 2011). The formula typically used to calculate IOA
is agreements divided by agreements plus disagreements multiplied by 100. The threshold for
acceptable IOA is arbitrarily determined, though the literature supports considering 80% or
higher as acceptable (Kahng et al., 2011; Kazdin, 1977; Parsons, Cash, & Reid, 1987; Repp,
Deitz, Boles, Deitz, & Repp, 1976). IOA is calculated to assess the competence of new observers
and to detect observer drift. Additionally, high IOA coefficients provide confidence that the
target behaviors are clearly defined and the measurement system can be easily used by new
observers and that changes in the data are representative of changes in behavior (Cooper, Heron,
& Heward, 2007). IOA was calculated in the current project, first, to assess the accuracy of the
measurement system and, second, to assess the competence of new observers.
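The point-by-point agreement calculation described above can be sketched in a few lines of code. This is a hypothetical illustration, not part of the project's materials; the function name and example codes are the author's own:

```python
def ioa_percent(observer_a, observer_b):
    """Point-by-point interobserver agreement (IOA).

    observer_a and observer_b are equal-length lists of the codes
    (e.g., "yes", "no", "n/a") each observer recorded for the same
    sequence of checklist steps. Returns agreements divided by
    agreements plus disagreements, multiplied by 100.
    """
    if len(observer_a) != len(observer_b):
        raise ValueError("observers must score the same number of steps")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return agreements / len(observer_a) * 100

# Two observers score a 4-step checklist and disagree on one step.
print(ioa_percent(["yes", "no", "n/a", "yes"],
                  ["yes", "no", "yes", "yes"]))  # 75.0
```

With this formula, 80% agreement on a 5-step checklist corresponds to a single disagreement, which is one reason the 80% convention is workable for short observation records.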
The current project was initiated by a large residential care facility in order to fulfill a
United States Department of Justice requirement that all direct care staff and supervisors of the
facility (approximately 1,600 at the time of the project) receive competency-based training on a
variety of skills including behavior management. In addition to incumbent staff, the facility hires
approximately 50 new employees each month (range, approximately 30 to 100).
of the disadvantages to evidence-based, competency-based training is the amount of time it
requires of both trainers and trainees to implement (Parsons, Rollyson, & Reid, 2012).
Additionally, large facilities face logistical problems of scheduling and the expense of training
large numbers of staff (Page, Iwata, & Reid, 1982). At the onset of the current project the facility
utilized a previously developed behavior management curriculum during new-employee
orientation (described below in General Employee Orientation: Foundational Behavior
Management Training), which was followed by on-the-job training (OJT). The caregiver training
implemented in the current project was structured so that it utilized the current behavior
management curriculum and could be conducted within the existing OJT. The structure of the
existing OJT and the large number of caregivers receiving competency-based training for this
project required the development of a “decentralized” training system in which multiple
supervisors could implement training systems with their staff.
Pyramidal training programs, or “train-the-trainer” programs, train small groups of staff
to then train additional staff members (Page et al., 1982). Page and colleagues (1982) extended
the use of pyramidal training into a large, institutional setting. Three supervisors were trained to
provide training to 45 direct care staff. Following supervisor training, improvements were seen
in the direct care staffs’ teaching behavior. These results indicate that training supervisors
produced changes in caregivers’ behavior. By using a pyramidal approach the number of
caregivers trained by each supervisor was reduced, trainers were able to integrate training within
daily routines (eliminating the need to schedule caregivers away from work), and that trainers
were present in the work environment to promote maintenance of the skills (where senior
supervisors were not frequently present). The large number of direct care staff that could be
trained using a pyramidal approach was also noted as an advantage by the authors. Pyramidal
training programs have been used to train trainers to teach a variety of skills such as safety-
related skills (van den Pol, Reid, & Fuqua, 1983), implementation of behavior-reduction
procedures in an institution (Shore, Iwata, Vollmer, Lerman, & Zarcone, 1995), training parents
to implement behavior intervention plans (Kuhn, Lerman, & Vorndran, 2003), and conducting
preference assessments (Pence, St. Peter, & Tetreault, 2012). In each of these cases pyramidal
training resulted in an improvement in the trainees’ behavior and increased dissemination of the
training procedures.
The current project sought to achieve the following goals: 1) develop a competency-
based training program to train caregivers in three behavior management skills, 2) use a
pyramidal training approach to train behavior service professionals to disseminate the caregiver
training, and 3) develop and refine measurement systems to detect caregiver acquisition of
skills, behavior service professionals' ability to score caregiver performance, and behavior
service professionals' ability to deliver a specified portion of the curriculum.
CHAPTER 2
METHOD
Caregiver Training
Participants and Setting
Caregiver training was provided to 68 caregivers working in a large, state residential care
facility for adults with intellectual and developmental disabilities. At the time of the study, the
facility employed approximately 1,600 staff across seven units providing services to
approximately 500 residents in 40 different homes. These caregivers were trained directly by the
primary trainer across a period of eighteen months. Caregivers ranged in age from 18 years to
approximately 50 years at the time of training. Most caregivers had obtained a high
school diploma; however, high school or equivalent education was not required for employment.
Most caregivers spoke English as their first language, although English was a second language
for a substantial percentage of participants.
Procedures were conducted in clinical, office, conference, and/or computer lab settings
located on the campus of the residential care facility. All rooms contained a video monitor and
computer on which video recordings could be played. All training rooms contained areas where
individual role plays could be conducted privately.
General Employee Orientation: Foundational Behavior Management Training
Prior to participating in competency-based behavior management training all direct
support staff attended two 8-hour training sessions covering positive behavior management and
supports. The first 8-hour session was part of the new employee orientation process for all
employees (i.e., all employees, including those whose positions did not include regular contact
with residents, attended this training). The second 8-hour session was attended by employees
whose positions included regular contact with residents, including direct contact personnel, home
team leaders, speech therapists, behavior management professionals, etc. All classes were taught
by graduate student trainers employed by the Behavior Analysis Resource Center (BARC). The
instructors utilized lecture, video examples, role play examples, and role play practice
throughout the course.
The course curriculum included a basic overview of behavior principles and procedures,
including reinforcement, punishment, coercion, rapport building, imitation, shaping, and
chaining. Some facility-specific policies and procedures were addressed, including standardized
behavioral data collection procedures and the components of individualized intervention plans.
Finally, the trainers taught some specific approaches to behavior management. The content of
this section of the curriculum was based on techniques and procedures described by Latham
(1994) and was adapted from previous versions used to train foster parents (Stoutimore,
Williams, Neff, & Foster, 2008), parents identified as at-risk for child abuse or maltreatment
(Berard & Smith, 2008), and caregivers in community settings (Van Camp et al., 2008). Three
specific “tools,” or general behavior management techniques, were targeted for acquisition: use
reinforcement, pivot, and redirect/reinforce (later changed to protect/redirect). Each tool was
operationally defined via a series of steps on a checklist. Use reinforcement was taught as a tool
to increase appropriate/desirable behaviors. Pivot was taught as a tool to decrease behavior that
is annoying but not physically dangerous or harmful to persons or property (this type of behavior
was characterized as “junk behavior”). Finally, redirect/reinforce was used to decrease behaviors
that are physically dangerous or destructive. The curriculum included instruction on how and
under what conditions to use each tool. These tools, along with fundamental behavior principles
and facility-specific procedures, comprised the foundation of the initial behavior management
curriculum presented to all employees during the facility’s comprehensive orientation process.
Trainers assessed participants’ acquisition of the material presented during the orientation
process via a multiple-choice quiz, a fill-in-the-blank quiz, and a role play check-off.
Approximately eight months after initiating the current project, several changes in the
structure and content of the curriculum were introduced. The redirect/reinforce tool was
modified to improve correspondence between the behavior management curriculum and policies
and procedures of the facility. The redirect/reinforce tool was retitled “protect/redirect” to reflect
these changes. For clarity the tool is referred to as protect/redirect from this point forward. In
addition, the sequence in which the training content was delivered was altered and redundancies
in the content were omitted in order to improve the flow and efficiency of the training.
On-the-Job Training (OJT): Competency-Based Behavior Management Training
Following completion of foundational behavior management training, competency-based
training procedures were implemented for employees whose positions included behavior
management responsibilities, including direct contact personnel and behavior management
professionals. Competency-based training was delivered using role plays, pre- and post-tests, and
video modeling procedures and was presented across three 2-hour sessions. The three sessions
occurred during the monthly on-the-job training (OJT) for new employees. OJT lasted
approximately two weeks and provided caregivers opportunities to demonstrate a variety of
essential skills (meal preparation, lifting procedures, behavior management training, etc.). The
number of caregivers trained during OJT varied from one to eight, depending on facility hiring in
a given month. The structure of the competency-based behavior management training presented
during OJT is described in the following sections.
Structure and sequence of training activities. The OJT training was presented during
three 2-hour sessions. During the first session, the instructor presented an outline of the three
2-hour sessions. Each caregiver then completed a role play pre-test for each specific tool; no
performance-specific feedback was provided. Video modeling was presented following the role
play pre-test. The video vignettes depicted three contrived examples, featuring the primary
trainer and additional trainers, that corresponded to each specific tool. Each vignette was
preceded by a slide showing the name of the tool and describing the role of each person in the
video (i.e., caregiver or client). Initially, trainers provided no discussion or feedback during the
vignettes; later in the project, the trainer watched and discussed the vignettes with the caregivers.
Finally, facility-specific data collection procedures were described and caregivers practiced data
collection.
During the second session, caregivers completed individual role play post-tests for each
tool. Following the role play post-tests the caregivers watched a 10-minute data collection video
depicting graduate students acting out scenarios including instances of problem behavior. The
caregivers independently collected data using the procedures taught in the first training session,
and the trainer assessed the data records for accuracy and discussed the outcomes with the
caregivers. This procedure was used to assess and improve the caregivers' knowledge of facility-
specific data collection techniques. Finally, all caregivers with a common caseload assignment
received training on the general format and structure of the facility’s standardized behavior
management document and received individual client behavior plans for their caseload. The
caregivers from a common caseload met with the trainer and reviewed one of those plans within
a group context. Subsequently, each caregiver completed open book, fill-in-the-blank quizzes
corresponding to behavior management plans from their caseload. Quizzes were completed
outside of the training context and were due at the third training session.
During the third training session the trainer reviewed the fill-in-the-blank quizzes for
accuracy and completion. The trainer and caregivers discussed the outcomes, as well as any
remaining issues and questions, such as the use of the three behavior management tools with
specific clients and clarification of individual behavior plans. Facility paperwork documenting
the completion of training was completed during this final training session.
Role play pre- and post-tests. Acquisition of the three specific tools (reinforcement,
pivot, protect/redirect) was assessed via three interactive, standardized role play scenarios (see
Appendix A for the standard scenario scripts). The primary trainer developed a standard role
play scenario for each tool, in which the trainer assumed the role of a client and engaged in
appropriate, junk, or dangerous behavior, as appropriate to the scenario. Caregivers assumed the
role of direct care staff and attempted to implement the relevant behavior management tools.
Each scenario provided the caregiver at least one opportunity to correctly or incorrectly
demonstrate each step of the tool as it had been described during foundational behavior
management training (the exception was a single step during the reinforcement role play which
was only available for evaluation if the caregiver placed a demand).
Data collection. Caregiver performances were scored to determine the accuracy with
which the tools were implemented. Checklists were developed to evaluate caregivers’ acquisition
of the three specific tools (see Appendix B for the checklists). The checklists listed specific steps
of each tool and data collectors scored each step as yes, no, or n/a. Yes was scored if the
caregiver correctly demonstrated a given step. No was scored if the caregiver implemented a step
incorrectly or failed to demonstrate a given step. N/A was scored if there was not an opportunity
for the caregiver to demonstrate a given step (e.g., Step 5 of reinforcement). The primary trainer
served as the primary data collector for the majority of sessions. Because the trainer also played a
role in the role plays, checklists were completed immediately after, instead of during, the role
plays.
Interobserver agreement: General procedures. Interobserver agreement (IOA)
coefficients were utilized for two distinct purposes throughout the project. First, IOA was used to
assess and refine the measurement system components (e.g., checklists, behavioral definitions,
etc.) used to assess caregivers’ acquisition of the specific behavior management techniques. For
this purpose, two trainers (i.e., the code-writer and a trainer) scored each session independently
and the results were compared to obtain an IOA coefficient (see IOA scoring details below).
Second, IOA was used to assess behavior service professionals’ proficiency to score caregivers’
responses and to calibrate the behavior service professionals’ data collection performances. For
this purpose, one trainer and one behavior service professional scored caregiver responses and
the results were compared to obtain an IOA coefficient. Details about the specific procedures by
which IOA was used for these purposes are provided in the relevant sections below.
All general IOA procedures were consistent throughout the project. Two observers
independently scored caregiver (or, in the case of role plays and video models, trainer)
behaviors. IOA was assessed for each step on the checklists and agreement was scored if both
observers recorded the same code (yes, no, n/a) for a given step. A disagreement was defined as
the data collectors scoring different codes for the same step or one data collector failing to score
a step. IOA coefficients were obtained by summing the number of agreements, dividing the
result by the sum of agreements plus disagreements, and multiplying the outcome by 100.
Overall IOA was calculated for individual caregivers’ performances on a step-by-step basis for
each tool. IOA for each step was calculated across multiple caregivers’ data, on a caregiver-by-
caregiver basis for each specific step.
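The two IOA summaries described above (overall IOA for each caregiver across steps, and IOA for each step across caregivers) might be computed as follows. The data, names, and three-step checklist are illustrative assumptions, not the project's actual records:

```python
# Each entry holds the codes ("yes", "no", "n/a") that two observers
# independently assigned to each step of one caregiver's checklist.
checklists = {
    "caregiver_1": (["yes", "no", "n/a"], ["yes", "no", "n/a"]),
    "caregiver_2": (["yes", "yes", "no"], ["yes", "no", "no"]),
}

def agreement(pairs):
    """Percent agreement over (observer_a, observer_b) code pairs."""
    pairs = list(pairs)
    agree = sum(a == b for a, b in pairs)
    return agree / len(pairs) * 100

# Overall IOA for each caregiver, computed step by step within a tool.
per_caregiver = {cg: agreement(zip(a, b)) for cg, (a, b) in checklists.items()}

# IOA for each step, computed caregiver by caregiver across the sample.
n_steps = 3
per_step = [
    agreement((a[i], b[i]) for a, b in checklists.values())
    for i in range(n_steps)
]

print(per_caregiver)  # caregiver_2 has one disagreement out of 3 steps
print(per_step)       # the second step accounts for the lone disagreement
```

Tabulating `per_step` across caregivers, as in Table 1, is what exposes individual checklist steps whose agreement falls below 80%, which an overall coefficient alone could mask.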
Table 1 depicts the IOA coefficients calculated on a caregiver-by-caregiver basis for each
step of each tool. Analyzing IOA for each step across multiple caregivers permitted analyses of
IOA coefficients for each step for all tools, thus revealing patterns of IOA on a step-by-step
basis. This permitted identification of steps that needed revision, whereas overall IOA may have
masked areas needing improvement. Coefficients below 80% were considered low and in need of
revision. In most of these cases changes were made to measurement system components (e.g.,
checklists, behavioral definitions, etc.). Instances in which changes were not made following low
IOA (or were made after multiple tests with low IOA) were based upon the trainer’s clinical
judgment or the need for further observations in order to make the most appropriate refinement.
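The step-by-step screening described above can be illustrated with a short sketch (the 80% threshold comes from the text; the function name and data layout are assumptions):

```python
LOW_IOA_THRESHOLD = 80  # coefficients below this were considered low

def flag_steps_for_revision(step_ioa):
    """Return the checklist steps whose IOA coefficient fell below threshold.

    step_ioa maps step number -> IOA coefficient (percent) computed
    across multiple caregivers for that step.
    """
    return [step for step, coefficient in sorted(step_ioa.items())
            if coefficient < LOW_IOA_THRESHOLD]

# Hypothetical five-step checklist: steps 2 and 4 at 75% would be
# flagged as needing refinement
print(flag_steps_for_revision({1: 100, 2: 75, 3: 90, 4: 75, 5: 100}))  # [2, 4]
```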
The pivot tool provides an illustrative example of how IOA was utilized to target areas of the
measurement system for refinement. Following Test 1, checklists for all tools, including the pivot
checklist, were modified to include clarifying statements for data collectors/trainers. Following
Test 2, the IOA coefficients for steps two and four of the pivot checklist were both 75%.
Subsequently, the clarifying statements for these steps were further refined (details regarding the
checklist refinements are described in the following section). The IOA coefficients increased to
100% for Test 3 and remained above 80% for all subsequent tests. This process is representative
of the decisions made to refine the checklists following low IOA.
In some cases refinements to the measurement system components occurred in the
absence of low IOA (e.g., modifying the clarifying statement of Step 5 of the reinforcement tool
following Test 9). These changes were based on the trainer’s clinical judgment and/or in
preparation for dissemination of the measurement system. For example, a highly specific
clarifying statement pertaining solely to the standard role play (e.g., mark “yes” if ignores
statements like “stupid broom or stupid floor”) was replaced with a more general clarifying
statement (e.g., mark “yes” if ignores whining). Ensuring that the clarifying statements were not
tailored specifically to the standard role plays increased the generality of the measurement
system (e.g., the same measurement system could be used to assess caregiver performance in
novel role plays or the natural environment).
The number of steps with low IOA decreased following refinements to the measurement
system. Following the third refinement, the number of steps with low IOA decreased to 1, 0, and
2 for reinforcement, pivot, and protect/redirect respectively. At that time, all steps of the pivot
tool remained above 80% for nine of the eleven subsequent tests and did not require further
refinement. Similarly, the final refinement for reinforcement was made at the fifth refinement
(occurring after Test 9); subsequently, IOA fell below 80% on only one step, during
Test 15. IOA coefficients for protect/redirect increased substantially from Test 1 to Test 7,
although low IOA coefficients on three specific steps (3, 4, and 7) were observed once each during
Tests 7 through 9. Subsequently, when the redirect/reinforce tool was modified to improve
correspondence between the behavior management curriculum and the facility’s policies and
procedures, the checklist corresponding to that tool was substantially altered. The IOA
coefficients were 100% for all steps during Test 10, the first test utilizing the revised tool and
checklist. Sustained high IOA coefficients within the tool provided sufficient evidence that the
trainers could accurately measure caregivers’ acquisition of the three behavior management tools
with the checklists. Additionally, following Test 10 the primary trainer began implementing the
behavior service professional training described in the relevant sections below.
Checklist refinement. The data collection checklists were refined seven times during the
project. Checklist refinements are indicated in Table 1 with a dashed line and an asterisk notes
the specific steps that were altered. The original checklists listed each step for each tool and, in
some instances, provided examples of acceptable caregiver behavior for a given step. The results
for Test 1 indicated low IOA for the majority of steps for reinforcement and protect/redirect. The
first and most significant of the seven refinements occurred in response to these results.
Clarifying statements were added for each step of the checklist for all three tools. These
statements included: examples of caregivers’ behaviors (e.g., relaxed body posture, positive
affect), time limits on responding (e.g., “within 3 seconds of client saying ’I’m sweeping’”), and
instructions for scoring the code “no” on a particular step (e.g., “Mark No if staff talks to client
before they begin blocking”). The second refinement occurred following Test 2. Additional
clarifiers were added for two steps in reinforcement, two steps in pivot, and four steps of
protect/redirect. These clarifiers consisted of additional examples of when to score “yes” or “n/a”
for a particular step. Similar clarifiers were added at each subsequent refinement. After IOA
reached an acceptable level no additional changes occurred.
A substantial revision of the protect/redirect checklist occurred between Test 9 and 10.
The tool, which was previously titled redirect/reinforce, was changed to improve correspondence
between the behavior management curriculum and policies and procedures of the facility, and the
checklist was revised to correspond with those curricula changes. The steps of the tool were
reordered and condensed from eight to six. All previous clarifiers were removed and the refined
checklist contained only the steps of the modified protect/redirect tool. An additional, minor
refinement was made to the tool following Test 10. The final checklist refinement occurred when
the first two steps were restructured. Initially Step 1 was “get within arm’s reach of the person
and say stop/phrase once or nothing at all” and Step 2 was “physically intervene to stop/block the
dangerous behavior without talking.” These were revised to “get within arm’s reach of the
person and physically intervene to stop/block the dangerous behavior within 10s” (Step 1) and
“can say stop/phrase once and continue blocking without talking” (Step 2).
Behavior Service Professional Training
Participants and Setting
Eleven behavior service professionals of the residential care facility were trained to
implement the caregiver training package. All behavior service professionals had obtained a
graduate degree in psychology or related discipline, and seven of the behavior service
professionals were board certified behavior analysts. Training of the behavior service
professionals took place during the last ten months of the project.
Procedures were conducted in clinical, office, conference, and/or computer lab settings
located on the campus of the residential care facility. All rooms contained a video monitor and
computer on which video recordings could be played. All training rooms contained areas where
individual role plays could be conducted privately.
Data Collection
Two sets of data were collected during the training of the behavior service professionals:
treatment integrity (TI) and caregiver performance. The data collection procedures remained
consistent throughout the behavior service professional training. Caregiver performances were
scored, using the same procedures as in caregiver training, to determine the accuracy with which
the caregivers implemented the tools. Treatment integrity was used to assess the behavior service
professionals’ ability to deliver the standard role play to the caregivers. IOA for each step was
used to assess behavior service professionals’ ability to score caregivers’ responses accurately
and as a basis to provide feedback on the behavior service professionals’ data collection
performances. Overall IOA was used for exit criteria, described in detail below.
Treatment integrity checklist. The primary trainer developed a treatment integrity
checklist to score the behavior service professionals’ responses (see Appendix C). The primary
trainer used the three standard scenario scripts (see Appendix A) as the basis for the checklist.
The checklist contained an initial step regarding a standard instruction given to caregivers, six
steps for conducting the reinforcement role play, six steps for conducting the pivot role play, and
eight steps for conducting the protect/redirect role play. Each step was scored by the trainer as
yes, no, or n/a. Yes was scored if the behavior service professional correctly demonstrated a
given step. No was scored if the behavior service professional implemented a step incorrectly or
failed to demonstrate a given step. N/A was scored if there was not an opportunity for the
behavior service professional to demonstrate a given step (e.g., Step 3 of protect/redirect).
A second trainer observed behavior service professional training sessions and
independently collected treatment integrity data for 18% of role plays conducted by behavior
service professionals (limited resources prevented a second trainer from being present for a
larger percentage of role plays). IOA coefficients were calculated between the two trainers for
those role plays. The average overall IOA for treatment integrity on the reinforcement role plays
was 85% (range = 33%-100%). The average overall IOA for treatment integrity on the pivot role
plays was 90% (range = 50%-100%). The average overall IOA for treatment integrity on the
protect/redirect role plays was 92% (range = 87.5%-100%).
Pilot Structure and Sequence of Training Activities
Pilot Group 1. The behavior service professional training was initially piloted with
Behavior Service Professionals 101 and 102. The structure and sequence of training varied from
that used with behavior service professionals in Pilot Group 2 (103, 104, 105, 106, 107) and
behavior service professionals in all subsequent training groups. The two behavior service
professionals in Pilot Group 1 attended two caregiver trainings conducted by the primary trainer,
observed during the first training and collected data on the caregivers’ behavior during the
second. Prior to the second training the behavior service professionals viewed seven video
vignettes used for data collection practice.
The vignettes depicted the standard role play scenarios with the primary trainer and an
additional trainer assuming the roles of trainer and caregiver respectively. Two vignettes were
created for each tool: one in which the caregiver demonstrated all the steps correctly and a
second vignette in which the caregiver made common errors. A third vignette depicting caregiver
errors was created for protect/redirect. The behavior service professionals in Pilot Group 1
viewed and scored each video independently. IOA was calculated, using the general procedures
described above, between each behavior service professional and the trainer for each vignette.
Finally, each behavior service professional conducted caregiver training in the presence
of a trainer. The behavior service professionals conducted the three standard role plays and
collected data using the checklists. The trainer scored caregiver responses for the purposes of
calculating IOA and treatment integrity data. The trainer provided praise and corrective feedback
to the behavior service professionals following the role plays and discussion of the vignettes used
in caregiver training.
Pilot Group 2. Training for the second pilot group was conducted as described in the
following sections with two exceptions. First, participants in Pilot Group 2 were not required to
complete the prerequisites; however, three of the five behavior service professionals voluntarily
attended the second 8-hour training session of General Employee Orientation. Second, during the
second training session the behavior service professionals did not receive a script for the
discussion points for the vignettes during caregiver training; instead, these participants received
the script prior to the feedback phase of behavior service professional training.
Structure and Sequence of Training Activities
Prerequisites. Prior to training on the implementation of the caregiver training package
each behavior service professional completed two prerequisite tasks. First they attended the
second 8-hour training session of General Employee Orientation, during which the three
behavior management tools taught during OJT were presented. Second, they were required to
participate in the caregiver training taught by the primary trainer. Participation in caregiver
training ensured that behavior service professionals could demonstrate the behavior management
tools and introduced them to the structure and sequence of the caregiver training.
Training sessions. Behavior service professionals were trained in groups; the average
group size was four behavior service professionals. Training took place in three parts: two 2-hour
training sessions and feedback. During the first 2-hour training session the primary trainer
described the structure and sequence of training activities and the behavior service professionals
practiced data collection procedures. The primary trainer provided the behavior service
professionals a materials packet containing a diagram of the behavior service professional
training structure and sequence, a flow chart of the caregiver training structure and sequence, a
written schedule with instructions for caregiver training, scripts for the three standard role plays,
discussion points for video modeling, checklists for data collection practice, and a copy of the
materials used during caregiver training (see Appendix D for materials contained in the packet).
Following the discussion of the materials packet the behavior service professionals practiced data
collection on the standard role play scenarios.
The seven video vignettes described above (see Pilot Group 1 section) were used for data
collection practice. The behavior service professionals viewed each video as a group and
independently collected data. Following each vignette the primary trainer disclosed the correct
answers and discussed the scoring with the behavior service professionals. IOA was calculated,
using the general procedures described above, between each behavior service professional and
the primary trainer for each vignette.
The second 2-hour training session consisted of the behavior service professionals
assuming the role of trainer and conducting the three standard role plays. The primary trainer
assumed the role of caregiver in these practice scenarios. Practice scenarios were provided in
which the primary trainer (caregiver) correctly demonstrated the behavior management tools as
well as scenarios in which the primary trainer (caregiver) intentionally erred. The primary trainer
provided feedback after each practice scenario. Following completion of the practice scenarios
the behavior service professionals practiced the discussion points associated with the nine video
vignettes used in caregiver training.
The third phase of behavior service professional training was feedback training sessions.
The behavior service professionals implemented the caregiver training while one or more trainers
observed, collected data, and provided feedback. All behavior service professionals completed at
least two feedback sessions, because the trainer observed the first and second sessions of the first
caregiver training each behavior service professional conducted. For behavior service professionals
trained after Pilot Group 2, feedback sessions continued until the behavior service professional met exit criteria.
Prior to the development of exit criteria the primary trainer based completion of training on
visual analysis of IOA and treatment integrity data.
Exit Criteria
Exit criteria were developed after the training of Pilot Group 2 as a standard for
terminating the feedback phase of behavior service professional training. Exit criteria were based
on IOA and treatment integrity data. Behavior service professionals were required to meet both
criteria prior to independently implementing the caregiver training. During caregiver training, IOA
below 80% was considered low; the same standard was used for exit criteria. Initially, behavior
service professionals were required to obtain a mean overall IOA coefficient of at least 80% on
the final three caregiver performance data sets they collected for each tool. This was revised to
obtaining a mean overall IOA coefficient of at least 80% on three consecutive caregiver
performance data sets collected for each tool. The revision was made after reviewing IOA
coefficients for several behavior service professionals whose IOA coefficients were generally high
except in one of the last three data sets initially used to calculate exit criteria. Treatment
integrity data assessed the behavior service professionals’ ability to deliver the standard role
plays to the caregivers. Standard role plays were a necessary component for accurately scoring
the caregivers’ behavior using the checklists and assessing if skill acquisition occurred on the
part of the caregiver. Therefore, the primary trainer judged 100% accuracy to be necessary for
treatment integrity. The behavior service professionals were required to score 100% on at least
three role plays for each tool.
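The two exit criteria can be summarized as simple checks (a sketch under the description above; the function names and data representations are assumptions):

```python
def met_ioa_exit_criterion(overall_ioa_history, n=3, threshold=80):
    """True if any n consecutive overall IOA coefficients for a tool
    average at or above the threshold (the revised criterion)."""
    return any(sum(overall_ioa_history[i:i + n]) / n >= threshold
               for i in range(len(overall_ioa_history) - n + 1))

def met_ti_exit_criterion(ti_scores, n=3):
    """True if at least n role plays for a tool were conducted at
    100% treatment integrity."""
    return sum(score == 100 for score in ti_scores) >= n

print(met_ioa_exit_criterion([60, 85, 90, 88]))    # True
print(met_ti_exit_criterion([75, 100, 100, 100]))  # True
```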
CHAPTER 3
RESULTS
Caregiver Training
Caregiver performances were scored to determine the accuracy with which the tools were
implemented. Figure 1 shows the caregivers’ average pre- and post-test scores for each tool,
summarized across all caregivers trained by the primary trainer. There was an increase in
accuracy of implementation from pre- to post-test for each of the three tools. The greatest
increase in caregiver accuracy was observed for the protect/redirect tool. On average, caregivers
scored lowest on the protect/redirect pre-test (62.5%), thus providing the opportunity for the
largest amount of change. By contrast, caregivers implemented the reinforcement tool with
higher average accuracy (88.9%) during the pre-test, creating a ceiling effect for the amount of
change that could occur.
Figure 2 shows the results of caregiver training grouped into four quartiles (0-25%, 26-
50%, 51-75%, and 76-100%) according to pre-test scores. Pre-test quartiles were used to observe
general patterns in the outcomes that were not evident in the aggregate scores for each tool
displayed in Figure 1. Pre-test score quartiles allowed the trainer to display the difference in
the percentage of caregivers who scored generally low versus generally high on the pre-test for
each of the tools. Pre-test score quartiles were also used to examine if there were differing post-
test outcomes for caregivers who scored generally low on the pre-test versus those who scored
generally high on the pre-test. The top panel of Figure 2 depicts the percentage of caregivers
scoring in each quartile for each tool. These outcomes show that, in general, a “stair step” pattern
is apparent across quartile bins, with the fewest participants scoring in the bottom quartile,
increases in the percentage of scores in successive quartiles, and the highest percentage of scores
in the top quartile. Scores for the protect/redirect tool deviated slightly from this pattern, with
fewer scores in the third quartile relative to the second quartile. Fewer scores in the lowest
quartile corresponded to a greater percentage of scores in the top quartile across tools. For example, the
lowest percentage of pre-test scores in the bottom quartile was observed for the reinforcement
tool, which was also associated with the highest percentage of pre-test scores in the top quartile.
By contrast, the highest percentage of pre-test scores in the bottom quartile was observed for the
protect/redirect tool, which was associated with the lowest percentage of pre-test scores in the
top quartile.
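The quartile grouping used in Figure 2 can be expressed as a simple binning rule (a sketch; the bin labels follow the text, while the function name is an assumption):

```python
def quartile_bin(score):
    """Assign a pre-test percentage score (0-100) to its quartile bin."""
    if not 0 <= score <= 100:
        raise ValueError(f"score out of range: {score}")
    if score <= 25:
        return "0-25%"
    if score <= 50:
        return "26-50%"
    if score <= 75:
        return "51-75%"
    return "76-100%"

print(quartile_bin(62.5))  # 51-75%
```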
The middle panel of Figure 2 depicts the mean post-test score for each tool, grouped
across pre-test score quartiles. These results show that, across tools, the lowest mean post-test
scores were produced by caregivers with the lowest pre-test scores and the highest mean post-test
scores were produced by caregivers with the highest pre-test scores. Again, a generalized “stair
step” pattern can be seen across quartiles, with the exception of the reinforcement tool, which
showed a higher average post-test score in the second quartile relative to the third quartile. Mean
post-test scores approached 100% across all three tools for caregivers whose pre-test scores were
in the top quartile.
The bottom panel of Figure 2 depicts the mean percent change on each tool, grouped
across pre-test score quartiles. Percent change for each caregiver was calculated by subtracting
the pre-test score from the post-test score, dividing the result by the pre-test score and then
multiplying by 100. For the purposes of calculation the four caregivers who scored 0% correct on
the pre-test were assigned a pre-test score of 1%. To accommodate the substantial percent
change seen in the first pre-test score quartile for the reinforcement and pivot tools, without
masking the results from the other pre-test quartiles, the y-axis was broken at 120% and resumes
at 3000%. It is important to note that as pre-test scores increase, the amount of possible percent
change from pre- to post-test decreases (e.g., whereas the percentage increase from 1% to 100%
is 9,900%, the percentage increase from 99% to 100% is 1%). Caregivers with lower pre-test
scores (0%) had the largest percent change from pre- to post-test (4883%). As expected, the
mean percent change was lower for caregivers scoring in the higher pre-test score quartiles. The
mean percent change for caregivers scoring in the 76-100% pre-test score quartile for the pivot
tool was -1%. This was a result of all the caregivers in that pre-test score quartile scoring 100%
on the pre-test and two of those caregivers scoring less than 100% on the post-test.
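The percent-change calculation, including the substitution of 1% for pre-test scores of 0%, can be sketched as follows (the function name and `floor` parameter are illustrative assumptions):

```python
def percent_change(pre, post, floor=1.0):
    """Percent change from pre- to post-test.

    Scores are percentages (0-100). Pre-test scores of 0% are replaced
    with `floor` (1%) so that the ratio is defined, as described above.
    """
    pre = max(pre, floor)
    return (post - pre) / pre * 100

print(percent_change(1, 100))  # 9900.0
print(percent_change(0, 100))  # 9900.0 (0% treated as 1%)
```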
Figure 3 depicts representative individual pre- and post-test scores for four caregivers
(011, 016, 052, and 056). Caregiver 011 scored in the first quartile (0-25%) during the pre-test
for all three tools. A large increase was seen in 011’s accuracy across all three tools. Caregiver
016 scored in the second quartile (26-50%) during the pre-test for the pivot and protect/redirect
tools and in the fourth quartile for the reinforcement tool. Although increases in correct
implementation are apparent for the pivot and protect/redirect tools, a slight decrease in 016’s
implementation of the reinforcement tool was seen from pre- to post-test. This pattern is
representative of the most common type of decrease in accuracy. As noted in the Method section,
caregivers had the opportunity to demonstrate Step 5 of the reinforcement tool only if they
placed a demand to the trainer. Caregivers whose scores decreased on the reinforcement post-test
typically did not place a demand during the pre-test; therefore, for those caregivers Step 5 was
assessed only during the post-test and, for those who failed to respond appropriately, a decrease
in overall correct implementation was scored. Caregiver 052’s data are representative of
participants who scored in the third quartile (51-75%) on the pre-test for pivot and
protect/redirect tools, while scoring 100% for the reinforcement tool. This participant maintained
100% accuracy during the reinforcement post-test and increases in accuracy were scored for the
pivot and protect/redirect tools. The final panel in Figure 3 depicts caregiver 056’s pre- and post-
test scores. Caregiver 056 scored in the fourth quartile (76-100%) on the pre-tests for all three
tools. Scores of 100% were maintained on the post-tests for the reinforcement and pivot tools, and
accuracy of implementation of the protect/redirect tool increased to 100% at post-test.
Behavior Service Professional Training
Interobserver Agreement
Interobserver agreement coefficients were calculated between the behavior service
professionals and a trainer in order to assess the behavior service professional’s ability to score
caregivers’ responses accurately. IOA coefficients were also used as one component of the exit
criteria for completion of behavior service professional training. Behavior service professionals
were required to obtain a mean overall IOA coefficient of at least 80% on three consecutive
caregiver performance data sets for each tool. Table 2 depicts the mean IOA coefficient for each
tool across the three consecutive data sets used for exit criterion for all behavior service
professionals. One limitation is that the IOA coefficients calculated
for two of the three data sets used for the exit criteria for behavior service professional 104 were
calculated using caregiver data collected by another behavior service professional (102) rather
than a trainer. However, behavior service professional 102 had met exit
criteria with high IOA coefficients prior to collecting the data used in calculating 104’s exit
criteria, increasing confidence in the accuracy of this individual’s data.
Treatment Integrity
Treatment integrity data were used to assess the behavior service professionals’ ability to
deliver the standard role play to the caregivers. All behavior service professionals were required
to conduct three role plays at 100% accuracy for each tool. The average number of role plays to
exit criterion was 3.9, 4.0, and 4.2 (range = 3-6) for reinforcement, pivot, and protect/redirect
respectively. Some behavior service professionals initially conducted the standard role plays with
high accuracy and maintained high accuracy with little variability. Other behavior service
professionals initially conducted the role plays with lower or more variable accuracy prior to
meeting exit criterion. Figure 4 illustrates these patterns with representative data from Behavior
Service Professionals 107 and 109. Behavior Service Professional 107’s scores were
representative of initial variable accuracy across all three role plays. During the first set of role
plays accuracy scores for this participant were 75%, 80%, and 100% for reinforcement, pivot,
and protect/redirect respectively. Variability in scores was observed during the first three sets of
role plays. During the fourth set of role plays this participant scored 100% accuracy for all tools.
A slight decrease for reinforcement (75%) and protect/redirect (87.5%) was observed the fifth
time 107 conducted role plays. Subsequently, scores remained at 100%, with the exception of the
pivot tool on the ninth role play, which decreased to 60% accuracy. Behavior Service Professional
109’s scores were representative of a highly accurate and stable pattern of role play scores. Exit
criterion was met in the fewest number of role plays possible, as 100% accuracy was
demonstrated during the first three role plays conducted across all the tools. A slight decrease in
accuracy (80%) was seen during the fourth session with the pivot tool and during the eighth
session with the reinforcement tool; all other scores remained at 100%.
Caregiver Results
Caregiver performances were scored to determine the accuracy with which the tools were
implemented. Figure 5 shows the caregivers’ average pre- and post-test scores for each tool,
summarized across all caregivers trained by the behavior service professionals. The caregiver
scores depicted in Figures 5 and 6 were derived from data collected by the primary trainer. An
increase in accuracy of implementation from pre- to post-test was observed for each of the three
tools. The greatest increase in caregiver accuracy was observed for the protect/redirect tool. On
average, caregivers scored lowest on the protect/redirect pre-test (60%), thus providing the
opportunity for the largest amount of change. By contrast, caregivers implemented the
reinforcement tool with higher average accuracy (89%) during the pre-test, creating a ceiling
effect for the amount of change that could occur.
Figure 6 shows the results of caregiver training grouped into four quartiles (0-25%, 26-
50%, 51-75%, and 76-100%) according to pre-test scores. The top panel of Figure 6 depicts the
percentage of caregivers scoring in each quartile for each tool. These outcomes show that, in
general, a “stair step” pattern is apparent across quartile bins, with the fewest participants scoring
in the bottom quartile, increases in the percentage of scores in successive quartiles, and the
highest percentage of scores in the top quartile. Scores for the pivot tool deviated slightly from
this pattern, with fewer caregivers scoring in the second and third quartile relative to the first
quartile. Scores for the protect/redirect tool also deviated slightly from this pattern, with fewer
caregivers scoring in the fourth quartile relative to the third quartile. Fewer scores in the lowest
quartile corresponded to a greater percentage of scores in the top quartile across tools. For example, the
lowest percentage of pre-test scores (0%) in the bottom quartile was observed for the
reinforcement tool, which was also associated with the highest percentage of pre-test scores in
the top quartile.
The middle panel of Figure 6 depicts the mean post-test score for each tool, grouped
across pre-test score quartiles. These results show that, across tools, the lowest mean post-test
scores were produced by caregivers with the lowest pre-test scores and the highest mean post-test
scores were produced by caregivers with the highest pre-test scores. Again, a generalized “stair
step” pattern can be seen across quartiles. Mean post-test scores approached 100% across all
three tools for caregivers whose pre-test scores were in the top quartile.
The bottom panel of Figure 6 depicts the mean percent change on each tool, grouped
across pre-test score quartiles. Percent change for each caregiver was calculated as described
previously and again for the purposes of calculation the four caregivers who scored 0% correct
on the pre-test were assigned a pre-test score of 1%. To accommodate the substantial percent
change seen in the first pre-test score quartile for the pivot tool, without masking the results from
the other pre-test quartiles, the y-axis was broken at 400% and resumes at 1000%. It is important
to note that as pre-test scores increase, the amount of possible percent change from pre- to post-
test decreases (e.g., whereas the percentage increase from 1% to 100% is 9,900%, the percentage
increase from 99% to 100% is 1%). Caregivers with lower pre-test scores (0%) had the largest
percent change from pre- to post-test (1580%). As expected, the mean percent change was lower
for caregivers scoring in the higher pre-test score quartiles. No change (0%) was observed for the
reinforcement tool in the first quartile because no caregivers scored in the lowest quartile on the
pre-test. Similarly, 0% change was observed in the fourth quartile for the pivot tool because all
caregivers in that quartile scored 100% on both the pre- and post-tests.
CHAPTER 4
DISCUSSION
The current project demonstrated the application of evidence-based, competency-based
training on a notable scale in a large, state residential care facility. The results represent the
initial outcomes of a project that included development and refinement of curriculum and
measurement systems used to establish and evaluate a pyramidal training program to train a set
of behavior management skills. Results are presented indicating that 109 caregivers and 11
behavior service professionals demonstrated improved behavior management skills following
training using the curriculum. Furthermore, using the pyramidal training model, 41 of the
caregivers received training from behavior service professionals, and outcomes for those trainees
were consistent with those produced when training was implemented directly by the primary
trainer. The staff training program is currently utilized by the residential care facility to train
approximately 50 new caregivers each month.
A traditional experimental design was not utilized in the current project as this project
was not initially intended as behavior-analytic research. The current project originally sought to
apply behavior-analytic evidence-based practice to develop and implement training to meet the
facility’s needs. The impetus behind the current project (i.e., achieving compliance with a United
States Department of Justice requirement) necessitated that training on a large-scale be
implemented from the onset. Although this did not permit an experimental evaluation of the
training package across multiple experimental phases or the use of small-n research strategies
typical of behavior-analytic research, the results are sufficiently robust to support several general
conclusions. First, interobserver agreement measures were useful to develop a reliable
measurement system, to refine some curriculum elements, and to evaluate measurements
conducted by behavior service professionals. Second, behavior skills training (BST) resulted in
caregiver acquisition of all three behavior management techniques, an effect that was replicated
across the 109 caregivers. Third, the pyramidal training approach was effective to teach behavior
service professionals to deliver BST and accurately measure the performances of trainees.
The benefits of pyramidal training for training large numbers of staff have been
previously noted in the literature and are particularly evident in the current project. Although
competency-based training is considered best-practice, a limitation is the amount of time
required for its implementation (Parsons, Rollyson, & Reid, 2012). In order for a large, state
facility to effectively utilize competency-based BST procedures, a more efficient approach was
needed. The pyramidal approach allowed for the training of exponentially more caregivers than
more typical models of centralized training. Additionally, training the behavior service
professionals allowed the competency-based training to be integrated within the facility’s
existing OJT training, in which each behavior service professional trained new caregivers
individually or in small groups. This integration has facilitated the successful adoption of the
caregiver training program throughout the facility.
Using a standard role play scenario for each tool (conducted by the primary trainer
throughout the refinement process) provided an empirical way to develop and continually refine
the measures utilized in the current project. A standard role play ensured that each caregiver had
an opportunity to demonstrate each component of the three behavior management tools.
Consistent delivery of those opportunities by the primary trainer (and creator of the standard role
plays) helped to ensure that observers’ scoring of the checklist was under the correct stimulus
control (e.g., the observer scored “yes” on the step “ignore junk behavior” if a caregiver turned
away when the trainer yelled and whined). The primary trainer was also the code-writer and
provided a standard record used for comparison with other observers. Baer, Wolf, and Risley
(1987) suggested that having the code-writer serve as an observer provides a way to empirically
revise a code until observers’ recording and observing behaviors are under the appropriate
stimulus control. The code-writer’s record was the standard, representing the “true values” of
events that occurred, and calculation of interobserver agreement with other observers assessed the
accuracy, validity, and reliability of the measurement systems used in the current project. The
use of a code-writer’s record as a source of comparison is a way to assess accuracy when more
commonly utilized methods (e.g., repeatedly calculating IOA from a video recording until
acceptable levels are achieved) are not available or feasible in the given situation.
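The thesis does not spell out the agreement computation, but a step-by-step (point-by-point) comparison of a new observer's checklist against the code-writer's standard record, of the kind described above, can be sketched as follows. The step scores shown are hypothetical:

```python
def point_by_point_ioa(standard, observer):
    """Point-by-point interobserver agreement: the percentage of checklist
    steps on which an observer's record matches the code-writer's standard
    record (agreements / total steps * 100)."""
    if len(standard) != len(observer):
        raise ValueError("records must score the same checklist steps")
    agreements = sum(a == b for a, b in zip(standard, observer))
    return agreements / len(standard) * 100.0

# Hypothetical scores for the six steps of the reinforcement checklist.
code_writer  = ["yes", "yes", "no",  "yes", "n/a", "yes"]
new_observer = ["yes", "yes", "yes", "yes", "n/a", "yes"]
print(round(point_by_point_ioa(code_writer, new_observer), 1))  # 83.3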
A challenge of any staff training program intended for large-scale dissemination is the
development of a measurement system that can be easily acquired and used by new observers.
The current project found that a checklist, scored with pen and paper, was sufficient to detect
caregivers’ acquisition of the behavior management skills. Computerized data collection or
coding of video-recorded performances has been utilized in the staff training literature; however,
those data collection procedures were not feasible for the current project. The cost of computers
or handheld data collection devices, software, and/or video equipment can be a barrier to use on
a large scale. A paper checklist system was a cost-efficient way to disseminate data collection to
all the behavior service professionals in the large facility. The behavior service professionals were
easily trained to use the checklists to score caregiver performance. Interobserver agreement
between a trainer and new observers (e.g., behavior service professionals) allowed the trainer to
calibrate new observers’ data collection procedures and easily detect when exit criterion was
met. More sophisticated data collection systems would have required an additional training
component in order to ensure new observers (the behavior service professionals) had the
necessary expertise to access and analyze the data. Finally, the use of a checklist provided the
trainers and behavior service professionals the ability to immediately visually inspect the data
and provide feedback to caregivers following the post-test. Data collected by computers or
obtained from coding videos requires additional time outside of the training session to analyze
and delays performance-specific feedback to the caregivers.
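Detecting when a new observer has met the exit criterion (per the training materials in Appendix D, an average of at least 80% overall agreement across three consecutive role plays) is simple arithmetic over the running IOA scores. A minimal sketch, with hypothetical score sequences:

```python
def exit_criterion_met(ioa_scores, window=3, threshold=80.0):
    """True once any `window` consecutive role plays have a mean IOA at or
    above `threshold` percent (here, 80% across 3 consecutive role plays)."""
    for i in range(len(ioa_scores) - window + 1):
        if sum(ioa_scores[i:i + window]) / window >= threshold:
            return True
    return False

# Hypothetical agreement scores across successive role plays.
print(exit_criterion_met([60.0, 75.0, 83.0, 92.0, 100.0]))  # True
print(exit_criterion_met([70.0, 70.0, 70.0]))               # False
```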
The behavior service professional training utilized BST procedures to train the behavior
service professionals to implement caregiver training. Just as it was necessary to empirically
evaluate caregiver competency, it also was necessary to develop measurement systems to
identify the behavior service professionals’ ability to deliver curriculum components and detect
their ability to score caregiver performance. The behavior service professionals were trained to
conduct the three standard role plays developed to assess caregiver performance. The use of a
treatment integrity checklist, comprised of the role play’s components, allowed the trainers to
detect when the behavior service professionals met mastery criterion for conducting the role
plays. The current project demonstrated the use of BST procedures and empirically validated
measurement systems to detect skill acquisition at both levels of a pyramidal training program.
There are several areas of the current project that warrant further consideration. First,
although the current project showed that caregivers could demonstrate targeted skills
during role play scenarios, maintenance of the skills over time and generalization of the skills to
other environments or scenarios were not assessed. To assess maintenance of the behavior
management skills, follow-up probes utilizing the same standard role plays from caregiver
training could be conducted at various points in time following acquisition of the skill.
Similarly, procedures used during training (e.g., calculating IOA coefficients and
treatment integrity) could be used to assess maintenance of the behavior service professionals’
ability to conduct the standard role plays and score caregiver performances. If decrements in
performances were observed either by caregivers or behavior service professionals, remedial
training and continued assessment would be indicated.
Generalization was not assessed in the current project. To assess generalization of the
three behavior management tools (reinforcement, pivot, and protect/redirect) in natural
environments it will be necessary to develop a measurement system to identify opportunities to
use the skills. A potential barrier to measuring caregivers’ use of the skills in the natural
environment is the amount of time necessary to observe a response opportunity. Whereas role
play scenarios permit scripted, and therefore frequent, opportunities to demonstrate targeted
skills, such opportunities may occur only rarely in natural environments. This may be
particularly true for dangerous or destructive behaviors needed to assess protect/redirect. The
development of novel role play scenarios, presented at various times following the completion of
initial training, could be used to address this limitation. However, a more definitive
demonstration of competency would be provided by documenting accurate implementation of
procedures with actual service recipients in the service setting.
A component analysis of the behavior service professional training, particularly in regard
to acquisition of the standard role plays, would be helpful. The structure of the current package
did not allow the primary trainer to pinpoint when each component of the role plays was
acquired. The behavior service professionals were exposed to the standard role plays through a
variety of means (e.g., participation in caregiver training, the written script, verbal descriptions,
the practice component in training session two, data collection video vignettes, and feedback
sessions). Due to the scripted nature of the role plays, it was not appropriate
to assess baseline without any prior instruction, so it is unclear if any of the behavior service
professionals could have demonstrated the role plays prior to training. Further research could
examine which components within behavior service professional training are necessary for
acquisition of the ability to conduct the role plays.
Finally, the current results are sufficiently encouraging to suggest dissemination of the
pyramidal BST procedures utilized in the current project on a larger scale. The project was
implemented in one of many large, residential facilities within a common State system.
Currently, there is no common curriculum or set of procedures for training caregivers in behavior
management procedures within this system. Given the positive results of the current project,
particularly in the context of the requirement by the United States Department of Justice that all
staff demonstrate competency in these skills, replication of this system in other facilities seems
appropriate. Future efforts toward such replication and extension could include an expansion of
the pyramidal approach to include training Directors of Behavior Service Departments to
implement the entire training package. Adding this level to the pyramidal structure would permit
“in-house” implementation of the entire program. This would represent not only an example of
the dissemination of evidence-based procedures, but would also result in the generation of
“practice-based evidence” (Maheady, Smith, & Jabot, 2013, pp. 139-140) as a function of the
systematic measurement component of the system. That is, by teaching those responsible for
training staff members not only to deliver curriculum but also to accurately evaluate their trainees’
ability to implement procedures, evidence of the effectiveness of training can be used to
immediately assess and respond to individual performances as well as for ongoing program
monitoring and improvement. Thus, replication and extension of this project in similar facilities
is encouraged to multiply the demonstrated benefits of the program and to accelerate further
innovation.
Table 1
IOA Coefficients Per Step

Reinforcement      T1   T2   T3   T4   T5   T6   T7   T8   T9   T10  T11  T12  T13  T14  T15
Step 1            100% 100% 100%  80%  80% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%
Step 2             67% 100% 100%  80%  80%  80% 100% 100% 100% 100% 100% 100% 100% 100%  83%
Step 3            100% 100%  67%  80%  80% 100% 100% 100% 100% 100% 100% 100% 100% 100%  83%
Step 4            100% 100% 100%  80%  80% 100% 100% 100% 100% 100% 100% 100% 100% 100%  83%
Step 5             33%  50% 100%  40%  40%  40% 100% 100% 100% 100% 100%  88% 100% 100%  67%
Step 6              0%   0%  67% 100% 100%  80% 100% 100% 100% 100% 100% 100% 100% 100%  83%

Pivot              T1   T2   T3   T4   T5   T6   T7   T8   T9   T10  T11  T12  T13  T14  T15
Step 1            100% 100%  67% 100% 100% 100% 100% 100% 100% 100% 100% 100%  71% 100%  83%
Step 2            100%  75% 100% 100%  80%  80% 100% 100% 100% 100%  88% 100% 100% 100% 100%
Step 3            100% 100%  67%  67% 100% 100% 100% 100% 100%  75% 100%  88% 100% 100% 100%
Step 4            100%  75% 100% 100% 100% 100% 100% 100% 100% 100%  88% 100% 100% 100% 100%

Redirect/reinforce T1   T2   T3   T4   T5   T6   T7   T8   T9
Step 1             67% 100% 100% 100%  80% 100%  80% 100% 100%
Step 2            100% 100%  67% 100% 100% 100% 100% 100% 100%
Step 3             67% 100%  67%  67% 100% 100%  60% 100% 100%
Step 4             33%  75%  33% 100%  20% 100%  80% 100%   0%
Step 5            100% 100%  67%  67% 100%  80%  80% 100% 100%
Step 6             67% 100%  67% 100% 100%  80%  80% 100% 100%
Step 7             33%  50%  67%   0%   0%  40% 100%   0% 100%
Step 8             67% 100%  67% 100% 100% 100% 100% 100% 100%

Protect/redirect   T1   T2   T3   T4   T5   T6
Step 1            100% 100% 100%  71% 100% 100%
Step 2            100% 100% 100%  57% 100% 100%
Step 3            100% 100% 100% 100% 100% 100%
Step 4            100% 100% 100% 100% 100% 100%
Step 5            100% 100%  88% 100% 100% 100%
Step 6            100% 100% 100%  86% 100%  67%

Each score represents the IOA for a given step across the successive administrations of the test. The dashed lines indicate checklist refinement and the asterisks indicate the specific steps refined.
Table 2
Mean IOA Coefficients Across Exit Criterion Role Plays

                   Behavior Service Professional
Tool                101  102  103  104  105  106  107  108  109  110  111
Reinforcement      100% 100% 100% 100% 100% 100%  83%  83% 100%  89%  94%
Pivot              100% 100% 100%  83%  92%  92% 100% 100% 100%  83%  92%
Protect/redirect    83%  94%  94%  83%  89%  83%  89%  89%  89%  89%  83%
Figure 1. Average pre- and post-test scores for all participants (N = 68).
Figure 2. Percentage of participants, mean post-test score, and mean percent change per pre-test
quartile.
Figure 3. Pre- and post-test results for Participants 011, 016, 052, 056.
Figure 4. Accuracy of implementation across successive implementations of the three standard
role plays for Behavior Service Professionals 107 and 109.
Figure 5. Average pre- and post-test scores for caregivers trained by behavior service
professionals (N = 41).
Figure 6. Percentage of caregivers, mean post-test score, and mean percent change per pre-test
quartile for caregivers trained by behavior service professionals.
APPENDIX A
STANDARD ROLE PLAY SCRIPTS
OJT Role Play Script:

Tell the OJT: “you will be given 3 scenarios and you will have to decide for each scenario which
tool you will use: Use Reinforcement, Pivot, Protect/Redirect.”

Use Reinforcement:

Tell the OJT: “you are the staff, I am the individual, we are in the apartment and I am completing
my chore of sweeping the floor”
BARC/Psychologist/Trainer:
1. Pretend to sweep the floor for approximately 3-5 seconds (without talking).
2. When you are done sweeping the floor say “I’m all done.”
   a. If the OJT says “great job” or “do you want to (go for a walk, watch TV, etc.)?” say
      yes and end the role play.
   b. If the OJT places a demand, for example “put the broom away,” engage in junk
      behavior (“whine, I don’t want to put the broom away/no/I don’t want to”) for
      3-5 seconds.
      i. If the OJT says “ok you don’t have to,” end the role play.
      ii. If the OJT ignores the junk, pretend to put the broom away, then end the
          role play.
Pivot:
Tell the OJT: “you are the staff, I am the individual, we are at workshop and my workshop task
is to count the items on the table.”
BARC/Psychologist/Trainer:
1. Engage in Junk: Say “I don’t like workshop, I don’t want to be here, I don’t like you, you
   are a stupid staff” while tossing the workshop items around the table and on the floor. Do
   this for about 5-10 seconds.
2. Engage in Better Behavior: Then begin to count the items on the table and pick up/count
   any items on the floor.
3. When you have counted all the items say “I’m all done.”
   a. If the OJT provides praise, end the role play.
   b. If the OJT places another demand or uses a coercive (e.g., criticizes), engage in
      about 5 seconds of whining/junk behavior and then calm down, giving the OJT a
      chance to reinforce.
      i. End the role play when the OJT pivots back and reinforces calm behavior,
         or after 10 seconds of no responding from the OJT.
Protect/Redirect:
Tell the OJT: “you are the staff, I am the individual and we are sitting in the apartment”
BARC/Psychologist/Trainer:
1. Without talking, hit your head with an open palm.
2. Do not respond to any request made by the OJT to stop or put hands down.
3. Place your hands in your lap after 5-10 seconds of head hitting.
4. When the OJT talks to you (praises, asks what’s wrong, offers activity), engage in junk
   behavior for about 3-5 seconds (no talking: stomp your feet, hit the table, wave your
   hands); then sit calmly.
5. Comply with any request that the OJT makes or answer yes to any question asked.
6. End role play.
APPENDIX B
ROLE PLAY CHECKLISTS
OJT Check Off (Circle: Pre-Test / Post-Test)   Date: ________________________

Use Reinforcement Tool Checklist
Staff Name & Apt.: _________________________   Data Collector: _______________________
Scenario: Sweeping the floor

Step | Score (Yes/No/N/A) | Retraining 1 (Yes/No/N/A) | Retraining 2 (Yes/No/N/A) | Comments
1. Tell the person what behavior you liked
*The participant states an appropriate behavior that the client is engaging in
2. Provide a consequence for the behavior that matches the value of the behavior
*i.e. Verbal Praise, Break, Walk, Snack
3. Provide the positive consequence within 3-7 seconds of recognizing the appropriate behavior
*Within 3-7 seconds of client sweeping
4. Use sincere and appropriate facial expression, tone of voice and body language
*relaxed body posture, positive affect
5. Say nothing and do nothing about junk behavior throughout the process
*Mark Yes if Ignores “whining” *Mark No if argues with individual
*Mark N/A if Instructor did not engage in Junk
6. Stay cool *Score Yes if uses calm voice & Avoids
coercives *Score No if not calm or uses coercives
*Do not mark N/A
TOTAL SCORE
OJT Check Off (Circle: Pre-Test / Post-Test)   Date: ________________________

Pivot Tool Checklist
Staff Name & Apt.: _________________________   Data Collector: _______________________
Scenario: Whining workshop is too hard/throwing workshop materials on the floor

Step | Score (Yes/No/N/A) | Retraining 1 (Yes/No/N/A) | Retraining 2 (Yes/No/N/A) | Comments
1. Say nothing about the junk behavior
*Mark No if they talk about the whining, say “don’t do that” or “stop”, talk about
throwing objects
2. Actively attend to another person or activity
*For example mark Yes if looking at paperwork or pretends to engage with
materials or looks/turns away from client
3. Provide reinforcement for the better behavior.
Within 10-seconds of client picking up work materials or working on activity
reinforce i.e. Verbal Praise, Break, Walk, Snack
Mark No if praise is not specific
4. Stay Cool
*Score Yes if uses calm voice & Avoids coercives
*Score No if not calm or uses coercives *Do not mark N/A
TOTAL SCORE
OJT Check Off (Circle: Pre-Test / Post-Test)   Date: ________________________

Protect/Redirect Tool Checklist
Staff Name & Apt.: _________________________   Data Collector: _______________________
Scenario: SIB-Hitting head

**Must successfully complete any step on 100% of opportunities to score a Yes**

Step | Score (Yes/No/N/A) | Retraining 1 (Yes/No/N/A) | Retraining 2 (Yes/No/N/A) | Comments
1. Get within arm’s reach of the
person and physically intervene to stop/block the dangerous behavior within 10s
2. Can say stop/phrase ONCE and
continue blocking without talking
3. Stay Calm and Cool (avoid coercion)
4. Ignore Junk Behavior
5. When you see calm or better behavior, use reinforcement
Either:
a) Wait (block if necessary) b) Offer a new activity
TOTAL SCORE
APPENDIX C
TREATMENT INTEGRITY CHECKLIST
Psych: ___________________________   OJT: ______________________________   Date: ____________________________
Data Collector: _____________________

Step by Trainee (score each Yes / No / N/A, with Comment)

Reinforcement
- Tells the OJT “you will be given 3 scenarios and you will have to decide for each scenario which tool you will use: Use Reinforcement, Pivot, Protect/Redirect”
- Tells the OJT: “you are the staff, I am the individual, we are in the apartment and I am completing my chore of sweeping the floor”
- Pretends to sweep the floor for approximately 3-5 seconds without talking
- Says “I’m all done” after sweeping for 3-5 seconds
- Ends role play if OJT praises
- Ends role play if OJT offered activity and they comply
- Ends role play after 3-5 seconds of whining/engaging in junk behavior if OJT places a demand (for example, “put the broom away”)
Feedback: ________________________________________________________________________________________

Pivot
- Tells the OJT: “you are the staff, I am the individual, we are at workshop and my workshop task is to count the items on the table.”
- Says “I don’t like workshop, I don’t want to be here, I don’t like you, you are a stupid staff” while tossing the workshop items around the table and on the floor for about 5-10 seconds
- Begins to count the items on the table and picks up/counts any items on the floor
- Says “I’m all done” after counting all the objects
- Ends role play if OJT offers praise
- Ends role play after 5-10 seconds of junk behavior if the OJT uses a coercive or attends to the junk behavior
Feedback: ________________________________________________________________________________________

Protect/Redirect
- Tells the OJT: “you are the staff, I am the individual and we are sitting in the apartment”
- Hits head with an open palm without talking
- Doesn’t respond to requests to stop or put hands down
- Places hands in lap after 5-10 seconds of head hitting
- Engages in junk for 3-5 seconds after the OJT praises
- Sits calmly
- Complies with any offer/activity suggested by OJT
- Ends role play
Feedback: ________________________________________________________________________________________
APPENDIX D
BEHAVIOR SERVICE PROFESSIONAL TRAINING MATERIALS PACKET
Training the OJT Competency-Based Training Package

Prerequisites:
• PBMS II*
• OJT*
• *Participate in class; know & demonstrate tools

Training:
• Part 1: Learn the OJT process; practice collecting role play data (using training videos)
• Part 2: Practice role plays; practice talking through video examples
• Part 3: Run OJT supervised by BARC

Exit Criteria:
• TI: 100% correct of overall role play script, across 3 role plays
• IOA w/ BARC: 80% average overall agreement, across 3 consecutive role plays
New OJT Process
Day 1
Role Play Pre-tests (Reinforcement, Pivot,
Protect/Redirect)
Watch 3 video examples for each tool & discuss steps
Discuss data collection procedures
Watch data collection videos and practice taking data
Day 2
Role play Post-tests (Reinforcement, Pivot,
Protect/Redirect)
Feedback after Post-Test
Data collection check off
Info on PBSPs; receive PBSPs & quizzes for their apartment
Retraining on tools if needed
Day 3
Return completed PBSP quizzes
Ask questions
Sign inservice sheets & OJT packets
OJT Schedule:
OJT Day 1:
Materials:
1. Primary Data Collector: 1 copy of “OJT Role Play Check Offs v7” for each OJT
2. IOA Data Collector: 1 copy of “OJT Role Play Check Offs v7” for each OJT
3. For each OJT a copy of “OJT Data collection practice”
4. For each OJT a copy of “How to collect data”
5. Role play and Data collection videos
Set Expectations for OJT:
Tell the OJTs the following:
• The rest of the dates for OJT are Insert day 2 date & time and Insert day 3 date & time.
• Today: we will have you do 3 role plays (similar to those at the end of PBMS day 2 from orientation) to see what you remember; you will watch 3 video examples for each of the three tools; and you will practice data collection and watch 3 different videos to practice data collection.
• Next Meeting: You’ll check off on 3 role plays for the 3 different tools and check off on how to do data collection. You will be given the PBSPs and matching quizzes for all of the individuals on your home and one other home. You will work on the quizzes, ask questions, and then take them with you to finish during free time in the apartments during OJT.
• Final day (Check off day): You will bring your completed quizzes with you on Insert due date. They will be checked and discussed and then you will be checked off.
Pre-Test Role Plays:
• Individually conduct a role play with each OJT for each of the three tools (Use Role Play Script); using the checklist provided to score Yes, No, or N/A for each step.
  o Use Reinforcement: You are the client and you are sweeping the floor
  o Pivot: You are the client and begin to yell, whine, and throw small items on the floor while at workshop
  o Protect/Redirect: You are the client and you begin to engage in head hitting
• Do not provide feedback during or after these role plays.

Role Play Videos:
• The OJTs will now watch 3 video examples of each tool as extra examples.
• Discuss the steps of each tool for each video. (See video example talking points)
• If needed, pause the videos and discuss the steps so the OJTs can hear.
• Ask for them to respond back to you at different points in the video.
Data Collection Practice:
• Discuss the different ways that data is collected. (Use “How to Collect Data”)
• The OJTs will watch three data collection videos: one focusing on frequency recording, one focusing on duration recording, and one focusing on interval recording. Use the “OJT Data Collection Practice” worksheet.
• Discuss any questions they had regarding data collection.
OJT Day 2:
You will need:
1. Primary Data Collector: 1 copy of “OJT Role Play Check Offs v7” for each OJT
2. IOA Data Collector: 1 copy of “OJT Role Play Check Offs v7” for each OJT
3. For each OJT a copy of “OJT Data collection check off”
4. Data collection videos
5. 1 OJT PBSP Quiz Packet for each OJT
Post-Test Role Play:
• Individually conduct a role play with each OJT for each of the three tools (Use Role Play Script); using the checklist provided to score Yes, No, or N/A for each step.
  o Use Reinforcement: You are the client and you are sweeping the floor
  o Pivot: You are the client and begin to yell, whine, and throw small items on the floor while at workshop
  o Protect/Redirect: You are the client and you begin to engage in head hitting
• For each OJT, after their Role Play give feedback on steps they did correctly and incorrectly.
Data Collection Check Off:
• Ask the OJTs if they have any questions
• Give the OJT the “OJT Data Collection Check off” worksheet.
• Tell them to independently decide what target behavior to watch, what kind of data to take, and then take that kind of data while watching the video.
• Have the OJTs watch Data Collection Video 4 and take data.
• Collect the sheets and then discuss what kind of data they were supposed to take.
PBSP Quizzes:
• Give the OJTs the PBSP Staff Instructions & Quizzes for the individuals on their home and another home. Remind them that they are due completed on Insert due date. If they are not done they will not be able to be checked off or have their final OJT meeting until they are completed.
• Describe what is included in the Staff Instructions and how they complete the quizzes.
• For the remainder of the time they are to work on the quizzes asking you any questions that may arise.
OJT Day 3:
You will need:
1. OJT Inservice Sheets

PBSP Quizzes:
• Check the PBSP Quizzes for completion
• Check the PBSP Quizzes for correct answers
• Answer any questions that the OJTs may have had
• Discuss the PBSPs of individuals specific to your caseload that the OJTs need to know about.
• Sign off on OJT packet.
• Have them sign OJT Inservice Sheets
OJT Role Play Script:
Tell the OJT: “you will be given 3 scenarios and you will have to decide for each scenario which
tool you will use: Use Reinforcement, Pivot, Protect/Redirect.”
Use Reinforcement:
Tell the OJT: “you are the staff, I am the individual, we are in the apartment and I am completing
my chore of sweeping the floor”
BARC/Psychologist/Trainer:
1. Pretend to sweep the floor for approximately 3-5 seconds (without talking).
2. When you are done sweeping the floor say “I’m all done.”
   a. If the OJT says “great job” or “do you want to (go for a walk, watch TV, etc.)?” say
      yes and end the role play.
   b. If the OJT places a demand, for example “put the broom away,” engage in junk
      behavior (“whine, I don’t want to put the broom away/no/I don’t want to”) for
      3-5 seconds.
      i. If the OJT says “ok you don’t have to,” end the role play.
      ii. If the OJT ignores the junk, pretend to put the broom away, then end the
          role play.
Pivot:
Tell the OJT: “you are the staff, I am the individual, we are at workshop and my workshop task
is to count the items on the table.”
BARC/Psychologist/Trainer:
1. Engage in Junk: Say “I don’t like workshop, I don’t want to be here, I don’t like you, you
   are a stupid staff” while tossing the workshop items around the table and on the floor. Do
   this for about 5-10 seconds.
2. Engage in Better Behavior: Then begin to count the items on the table and pick up/count
   any items on the floor.
3. When you have counted all the items say “I’m all done.”
   a. If the OJT provides praise, end the role play.
   b. If the OJT places another demand or uses a coercive (e.g., criticizes), engage in
      about 5 seconds of whining/junk behavior and then calm down, giving the OJT a
      chance to reinforce.
      i. End the role play when the OJT pivots back and reinforces calm behavior,
         or after 10 seconds of no responding from the OJT.
Protect/Redirect:
Tell the OJT: “you are the staff, I am the individual and we are sitting in the apartment”
BARC/Psychologist/Trainer:
1. Without talking, hit your head with an open palm.
2. Do not respond to any request made by the OJT to stop or put hands down.
3. Place your hands in your lap after 5-10 seconds of head hitting.
4. When the OJT talks to you (praises, asks what’s wrong, offers activity), engage in junk
   behavior for about 3-5 seconds (no talking: stomp your feet, hit the table, wave your
   hands); then sit calmly.
5. Comply with any request that the OJT makes or answer yes to any question asked.
6. End role play.
Video Example Talking Points Script
While watching the following 9 videos, describe the steps of the tool to the OJTs.
Use Reinforcement 1: Individual is emptying the dishwasher
• In this example the staff is specific, tells the person what behavior they liked, “nice job
putting the dishes away” & “thank you for asking”.
• Staff provides a consequence that matches in value. Verbal praise and a high-five
matches putting away dishes.
• Staff provides the praise within 3-7 seconds of recognizing the good behavior or the
behavior they liked.
• Staff was sincere and enthusiastic with their praise and body language.
• There was no junk behavior in this example, but if there had been staff should pivot
(ignore it).
• And the whole time staff stayed calm, didn’t use any coercives.
Use Reinforcement 2: Staff asks individual to set the table
• In this example the staff is specific and tells the person what behavior they liked: “putting the
glasses in the right spot” and “nice job trying hard to help.”
• Staff provides a consequence that matches the behavior in value; here, verbal praise is enough.
• Staff provides the praise within 3-7 seconds of recognizing the good behavior or the
behavior they liked.
• Staff was sincere and enthusiastic with their praise and body language.
• There was no junk behavior in this example, but if there had been, staff should pivot
(ignore it).
• And the whole time, staff stayed calm and didn’t use any coercives.
Use Reinforcement 3: Staff sitting with a calm individual while a hyper individual is bothering him
• ASK: In this example, what type of behavior is running around saying “hey, hey”? (Junk)
• Since it is junk behavior, the staff are going to pivot away from it; they are going to
ignore it.
• When the individual says “he’s bothering me,” that’s good behavior we like and want to
reinforce.
• So staff are specific and say “thanks for telling me.”
• Staff provided verbal praise, which matches in value, and offered a different activity.
• Staff provided this within 3-7 s of seeing the appropriate behavior.
• The whole time, staff used sincere facial expressions and tone of voice, ignored junk
behavior, and stayed cool.
Pivot 1: Staff sitting with two individuals, one playing nicely and one yelling
• ASK: What type of behavior is Joe engaging in (the yelling)? (Junk)
• Staff say nothing about the junk behavior.
• Step 2 is to actively attend to another person, activity, or behavior; in this case, staff are
pivoting to their paperwork and waiting for better behavior.
• When Joe is calm (about 30 seconds in), ASK: Is sitting quietly better behavior? (Yes)
• Staff pivot back and provide reinforcement for this better behavior. Remember to always
be specific when you provide reinforcement.
• After the praise, Joe yells. ASK: What type of behavior is that? (Junk)
• Since he engages in more junk behavior, staff pivot away again to another person,
activity, or behavior; this time staff pivot to the other individual and provide
specific praise for his behavior.
• While waiting for better behavior, when Joe says “I don’t like him,” staff give specific praise:
“thanks for telling me.”
• And the whole time, staff remained calm and avoided coercives.
Pivot 2: Individual is working at the workshop with staff
• ASK: What type of behavior is whining? (Junk)
• Staff say nothing about the junk behavior.
• Staff looks away; she is looking at her water bottle and her watch.
• When the individual has better behavior, staff pivots back and provides specific
reinforcement: “nice job trying.”
• The individual has more junk behavior, so staff pivots away again. Remember, you may have
to pivot away multiple times.
• When she has better behavior, staff provides specific reinforcement: “nice job picking up
the pieces.”
• And the whole time, staff remained calm and avoided coercives.
Pivot 3: Individual is banging the door while staff folds laundry
• ASK: What type of behavior is that (banging)? (Junk)
• Staff says nothing about the junk behavior and actively attends to another activity by folding
the laundry. They are waiting for better behavior.
• ASK: Is this better behavior (when Matt starts folding laundry)? (Yes)
• Staff provides reinforcement for the better behavior.
• And the whole time, staff remained calm and avoided coercives.
Protect/Redirect 1: Individual hitting head
• ASK: What type of behavior is that? (Dangerous)
• So staff is going to get within arm’s reach and say stop once; remember, this is dangerous
behavior, so we want to get there quickly.
• Step 2 of protect/redirect is to block without talking. This is the protect part. Staff are
using their hands to block the hits, and the whole time she is not talking.
• Once Gloria calmed down, staff used reinforcement for that better, calm behavior.
Remember, we want to be specific and tell them what behavior we liked, so she said
“thanks for calming down, Gloria.”
• We also want to make sure we provide reinforcement before we offer any new activities;
this helps us make sure the person is calm and that we aren’t providing preferred
activities for junk or dangerous behavior.
• ASK: After reinforcement, was Gloria calm or did she engage in junk behavior? (Junk)
• Remember, we want to ignore junk, so staff pivot away and wait for better behavior.
• When the individual engages in better or calm behavior, the staff pivots back and
uses reinforcement again. Remember, we want to specifically reinforce EVERY time we
see better behavior or behavior we like!
• Since Gloria stayed calm after reinforcement, staff can now offer a new activity.
Protect/Redirect 2: Individual engaging in aggression towards property (throwing items at
staff)
• In this example, staff tells the individual to stop and gets more junk behavior, so the staff
pivots away.
• While the staff is pivoting away, the individual starts walking toward the staff to hit them.
ASK: Is this dangerous or junk now? (Dangerous)
• Pause the video for this point: for behavior like aggression toward staff or aggression toward
property, the protect part of protect/redirect is for staff to move out of the way (of
objects, furniture, or physical hits) and block the person from throwing anything that could
hurt someone: staying out of the way, waiting, and making sure they are still safe and that
no other individuals walk in and get hurt.
• That’s what the staff is doing in this video: moving out of the way while making sure the
individual is safe. The whole time, staff is not talking and not staring at the person.
• Once the person calms down, staff provides specific reinforcement for the calm behavior.
• The individual engages in more junk behavior, so staff waits and pivots away.
• When the person is calm again, staff provides reinforcement again. ASK: What is important
about our praise? (It’s specific.)
• Since the individual was receptive to the praise, staff offered a new activity.
Protect/Redirect 3: There are two individuals and two staff; one of the individuals begins
engaging in attempted PAO
• The individual starts off asking staff for snacks, then she begins grabbing staff. Staff say stop
one time and then block without talking.
• The staff are not attending to any of the junk behavior.
• When the individual who is being aggressive goes over to the other individual, staff get in
between and block; they are doing this without talking. They are waiting for better
behavior.
• In this example, another staff member is present who can help move the other individual out
of the way.
• After the other individual leaves, staff continue to ignore the junk behavior and wait for
better behavior.
• When the individual calms down and asks for a walk, staff provides reinforcement by
taking her for a walk.
• The whole time, the staff stayed cool and calm, avoiding coercives.
Use Reinforcement Tool Checklist
Use Reinforcement Video 1
Scenario: Sweeping the floor
Step Retraining 1 Retraining 2 Comments
Yes No N/A Yes No N/A Yes No N/A
1. Tell the person what behavior you liked
*The participant states an appropriate behavior that the client is engaging in
2. Provide a consequence for the behavior that matches the value of the behavior
*i.e. Verbal Praise, Break, Walk, Snack
3. Provide the positive consequence within 3-7 seconds of recognizing the appropriate behavior
*Within 3-7 seconds of client sweeping
4. Use sincere and appropriate facial expression, tone of voice and body language
*relaxed body posture, positive affect
5. Say nothing and do nothing about junk behavior throughout the process
*Mark Yes if Ignores “whining” *Mark No if argues with individual
*Mark N/A if Instructor did not engage in Junk
6. Stay cool *Score Yes if uses calm voice & Avoids
coercives *Score No if not calm or uses coercives
*Do not mark N/A
TOTAL SCORE
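Each checklist row above is marked Yes, No, or N/A and summed into a TOTAL SCORE. The form does not spell out the arithmetic; a minimal sketch, assuming the score is the percentage of applicable (non-N/A) steps marked Yes (the function name is hypothetical):

```python
def checklist_score(marks):
    """Compute a percent-correct TOTAL SCORE from Yes/No/N/A marks.

    Assumes (not stated on the form itself) that the score is the share of
    Yes marks among applicable (non-N/A) steps.
    """
    applicable = [m for m in marks if m != "N/A"]
    if not applicable:
        return None  # nothing scorable
    return 100 * applicable.count("Yes") / len(applicable)

# Six-step Use Reinforcement checklist with one No and one N/A step
print(checklist_score(["Yes", "Yes", "Yes", "Yes", "No", "N/A"]))  # → 80.0
```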
Use Reinforcement Tool Checklist
Use Reinforcement Video 2
Scenario: Sweeping the floor
Step Retraining 1 Retraining 2 Comments
Yes No N/A Yes No N/A Yes No N/A
1. Tell the person what behavior you liked
*The participant states an appropriate behavior that the client is engaging in
2. Provide a consequence for the behavior that matches the value of the behavior
*i.e. Verbal Praise, Break, Walk, Snack
3. Provide the positive consequence within 3-7 seconds of recognizing the appropriate behavior
*Within 3-7 seconds of client sweeping
4. Use sincere and appropriate facial expression, tone of voice and body language
*relaxed body posture, positive affect
5. Say nothing and do nothing about junk behavior throughout the process
*Mark Yes if Ignores “whining” *Mark No if argues with individual
*Mark N/A if Instructor did not engage in Junk
6. Stay cool *Score Yes if uses calm voice & Avoids
coercives *Score No if not calm or uses coercives
*Do not mark N/A
TOTAL SCORE
Pivot Tool Checklist
Pivot Video 1
Scenario: Whining workshop is too hard / throwing workshop materials on the floor
Step Retraining 1 Retraining 2 Comments
Yes No N/A Yes No N/A Yes No N/A
1. Say nothing about the junk behavior
*Mark No if they talk about the whining, say “don’t do that” or “stop”, talk about
throwing objects
2. Actively attend to another person or activity
*For example mark Yes if looking at paperwork or pretend to engage with
materials or looks/turns away from client
3. Provide reinforcement for the better behavior.
Within 10-seconds of client picking up work materials or working on activity
reinforce i.e. Verbal Praise, Break, Walk, Snack
Mark No if praise is not specific
4. Stay Cool
*Score Yes if uses calm voice & Avoids coercives
*Score No if not calm or uses coercives *Do not mark N/A
TOTAL SCORE
Pivot Tool Checklist
Pivot Video 2
Scenario: Whining workshop is too hard / throwing workshop materials on the floor
Step Retraining 1 Retraining 2 Comments
Yes No N/A Yes No N/A Yes No N/A
1. Say nothing about the junk behavior
*Mark No if they talk about the whining, say “don’t do that” or “stop”, talk about
throwing objects
2. Actively attend to another person or activity
*For example mark Yes if looking at paperwork or pretend to engage with
materials or looks/turns away from client
3. Provide reinforcement for the better behavior.
Within 10-seconds of client picking up work materials or working on activity
reinforce i.e. Verbal Praise, Break, Walk, Snack
Mark No if praise is not specific
4. Stay Cool
*Score Yes if uses calm voice & Avoids coercives
*Score No if not calm or uses coercives *Do not mark N/A
TOTAL SCORE
Protect/Redirect Tool Checklist
Protect/Redirect Video 1
Scenario: SIB-Hitting head
Step Retraining 1 Retraining 2 Comments
**Must successfully complete any step on 100% of opportunities to score a Yes**
Yes No N/A Yes No N/A Yes No N/A
1. Get within arm’s reach of the
person and physically intervene to stop/block the dangerous behavior within 10s
2. Can say stop/phrase ONCE and
continue blocking without talking
3. Stay Calm and Cool (avoid coercion)
4. Ignore Junk Behavior
5. When you see calm or better behavior, use reinforcement
Either:
a) Wait (block if necessary) b) Offer a new activity
TOTAL SCORE
Protect/Redirect Tool Checklist
Protect/Redirect Video 2
Scenario: SIB-Hitting head
Step Retraining 1 Retraining 2 Comments
**Must successfully complete any step on 100% of opportunities to score a Yes**
Yes No N/A Yes No N/A Yes No N/A
1. Get within arm’s reach of the
person and physically intervene to stop/block the dangerous behavior within 10s
2. Can say stop/phrase ONCE and
continue blocking without talking
3. Stay Calm and Cool (avoid coercion)
4. Ignore Junk Behavior
5. When you see calm or better behavior, use reinforcement
Either:
a) Wait (block if necessary) b) Offer a new activity
TOTAL SCORE
Protect/Redirect Tool Checklist
Protect/Redirect Video 3
Scenario: SIB-Hitting head
Step Retraining 1 Retraining 2 Comments
**Must successfully complete any step on 100% of opportunities to score a Yes**
Yes No N/A Yes No N/A Yes No N/A
1. Get within arm’s reach of the
person and physically intervene to stop/block the dangerous behavior within 10s
2. Can say stop/phrase ONCE and
continue blocking without talking
3. Stay Calm and Cool (avoid coercion)
4. Ignore Junk Behavior
5. When you see calm or better behavior, use reinforcement
Either:
a) Wait (block if necessary) b) Offer a new activity
TOTAL SCORE
How to Collect Data
FREQUENCY DATA:
Data for BEHAVIOR 1 will be documented as FREQUENCY on the Behavior Data Sheet.
You will place a tally mark for each time Behavior 1 occurred during each hour of your shift.
During hours in which no target behaviors occur, document a zero; if hour intervals are left
blank, this will be read as “staff did not take data.”
EXAMPLE: if Person engaged in four instances of Behavior 1 at 10:17 AM, it would look like
this:
6-2 SHIFT | 6AM | 7AM | 8AM | 9AM | 10AM | 11AM | 12PM | 1PM | Comments | OBSERVER'S NAMES
(F) Behavior 1 | 0 | 0 | 0 | 0 | I I I I | 0 | 0 | 0 | Who was there? What happened? | Mr. Ed
If you have any questions regarding a RB or TB, please refer to the PBSP.
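The frequency rule above (one tally per occurrence, grouped by hour, zero for hours with no behavior) can be sketched in code. This is an illustrative helper, not part of the training program; the function name and the use of Python datetimes are assumptions:

```python
from collections import Counter
from datetime import datetime

def hourly_frequency(timestamps, shift_hours):
    """Tally how many times a behavior occurred in each hour of the shift;
    hours with no instances are recorded as 0, per the data sheet rule."""
    counts = Counter(t.hour for t in timestamps)
    return {h: counts.get(h, 0) for h in shift_hours}

# Four instances of Behavior 1 at 10:17 AM on the 6-2 shift (6 AM-1 PM hours)
events = [datetime(2014, 8, 1, 10, 17)] * 4
print(hourly_frequency(events, list(range(6, 14))))
# → {6: 0, 7: 0, 8: 0, 9: 0, 10: 4, 11: 0, 12: 0, 13: 0}
```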
INTERVAL DATA:
Data for Behavior 2 and Behavior 3 will be documented by INTERVALS on the Behavior
Data Sheet. You will put a check mark in the time period in which the behavior occurs,
regardless of how many times it occurs. During hours in which no target behaviors occur,
document a zero; if hour intervals are left blank, this will be read as “staff did not take data.”
EXAMPLE: if Person engaged in Behavior 2 at 6:15 AM, 6:20 AM, and 6:27 AM, you would place a
check mark in the 6 AM interval. If Person engaged in Behavior 3 at 7:05 AM, 7:22 AM, and 7:45 AM,
you would place a check mark in the 7 AM interval. It would look like this:
6-2 SHIFT | 6AM | 7AM | 8AM | 9AM | 10AM | 11AM | 12PM | 1PM | Comments | OBSERVER'S NAMES
(I) Behavior 2 | √ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | Who was there? What happened? | Mr. Ed
(I) Behavior 3 | 0 | √ | 0 | 0 | 0 | 0 | 0 | 0 | Who was there? What happened? | Mr. Ed
If you have any questions regarding a RB or TB, please refer to the PBSP.
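The interval rule collapses any number of occurrences within an hour to a single check mark. A minimal sketch of that rule, with the same assumptions as the frequency example (illustrative function name, Python datetimes):

```python
from datetime import datetime

def hourly_interval(timestamps, shift_hours):
    """One check mark per hour the behavior occurred at least once,
    and '0' for hours it did not, regardless of how many occurrences."""
    hours_seen = {t.hour for t in timestamps}
    return {h: ("√" if h in hours_seen else "0") for h in shift_hours}

# Behavior 2 at 6:15, 6:20, and 6:27 AM collapses to a single check at 6 AM
events = [datetime(2014, 8, 1, 6, m) for m in (15, 20, 27)]
print(hourly_interval(events, [6, 7, 8]))  # → {6: '√', 7: '0', 8: '0'}
```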
DURATION DATA:
Data for Behavior 4 and Behavior 5 will be documented as DURATION on the Behavior Data
Sheet. You will put the number of minutes PERSON engaged in Behavior 4 and Behavior 5
during each hour of your shift, rounded UP to the nearest minute. During hours in which no
target behaviors occur, document a zero; if hour intervals are left blank, this will be read as
“staff did not take data.”
EXAMPLE: if he engaged in Behavior 4 for 35 minutes during the 10:00 AM hour interval and
Behavior 5 for 9 minutes and 27 seconds (recorded as 10), then it would look like this:
6-2 SHIFT | 6AM | 7AM | 8AM | 9AM | 10AM | 11AM | 12PM | 1PM | Comments | OBSERVER'S NAMES
(D) Behavior 4 | 0 | 0 | 0 | 0 | 35 | 0 | 0 | 0 | What was going on? | Mr. Ed
(D) Behavior 5 | 0 | 0 | 0 | 0 | 10 | 0 | 0 | 0 | | Mr. Ed
If you have any questions regarding a RB or TB, please refer to the PBSP.
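The round-UP rule for duration (9 min 27 s becomes 10) maps directly onto a ceiling division. A sketch under the same assumptions as the earlier examples, plus the simplification that each episode falls entirely within one hour:

```python
import math
from collections import defaultdict

def hourly_duration_minutes(episodes, shift_hours):
    """Sum seconds of behavior per hour, rounding UP to the nearest minute.

    episodes: (hour, seconds) pairs; each episode is attributed to the hour
    in which it occurred (a simplifying assumption for this sketch).
    """
    seconds = defaultdict(int)
    for hour, dur in episodes:
        seconds[hour] += dur
    return {h: math.ceil(seconds[h] / 60) for h in shift_hours}

# Behavior 5 for 9 min 27 s in the 10 AM hour rounds up to 10 minutes
print(hourly_duration_minutes([(10, 9 * 60 + 27)], [10, 11]))  # → {10: 10, 11: 0}
```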
BEHAVIOR DATA SHEET    DATE: __________    LOCATION: apartment    CLIENT NAME: person served
6-2 SHIFT | 6AM | 7AM | 8AM | 9AM | 10AM | 11AM | 12PM | 1PM | Comments | OBSERVER'S NAMES
Replacement Behaviors for Increase:
(I) Appropriate Requests | 0 | √ | 0 | √ | 0 | √ | √ | 0 | |
Targeted Behaviors for Decrease:
(I) PAO STAFF | √ | 0 | 0 | √ | 0 | 0 | 0 | 0 | |
(F) PAO CLIENT | I I I I | 0 | 0 | 0 | 0 | I | 0 | 0 | Who was hit? Was an injury report done? Hit peer #1111; peer took soda | Staff Name
(F) Stealing | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
(D) VDB | 15 | 0 | 0 | 0 | 0 | 10 | 0 | 0 | (write the number of minutes in each hour) |
(D) AGP | 25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | (write the number of minutes in each hour) |
If you have any questions regarding a RB or TB, please refer to their PBSP.
Instructions on How to Take Data:
(F) = Frequency: A tally mark for each time the behavior occurred during each hour of your shift. (e.g., four behaviors = I I I I)
(I) = Interval: A single check mark if the behavior occurs during the hour, regardless of how many times. (e.g., 1, 5, or 15 behaviors all = √)
(D) = Duration: Put the number of minutes the behavior occurred during each hour of your shift, rounded up to the nearest minute. (e.g., 1m 30s = 2m)
Put a zero for each hour the behavior did not occur. (e.g., no behavior = 0)
Behavior Definition Quick Reference
Appropriate Requests: Calmly requesting an item (i.e., asking "Can I have ______?")
PAO: Hitting, kicking, grabbing, biting, hitting others with objects such as his belt, or throwing objects at others
PAO Client to Client: Hitting, kicking, grabbing, biting, hitting others with objects such as his belt, or throwing objects at his peers
VDB: Includes, but is not limited to, yelling, cursing, screaming, and threatening others
Stealing: Taking others' possessions or food/drink items, such as sodas, pens, cups, or money
AGP: Hitting walls or furniture, slamming doors, throwing items or furniture, or any other behavior that could cause property damage
Name: _____________________________    Date: _________________
OJT Data Collection Practice
Watch the three 10-minute videos and collect data.
Instructions on How to Collect Data
(F) = Frequency: A tally mark for each time the behavior occurred during each hour of your
shift. (e.g., four behaviors = I I I I)
(D) = Duration: Put the number of minutes the behavior occurred during each hour of your shift,
rounded up to the nearest minute. (e.g., 1m 30s = 2m)
(I) = Interval: Put a check mark if the behavior occurs during the hour, regardless of how many
times. (e.g., 1, 5, or 15 behaviors all = √)
Put a zero for each hour the behavior did not occur. (e.g., no behavior = 0)
Behavior Definition Quick Reference
Saying No: Any time the client says the word "no"
Throwing Cards: Throwing or pushing cards away from her in a fast motion
Skin Pinching: Pinching skin between finger and thumb
Head Hitting: An episode begins when the palm of the hand touches the face or head and ends
after 30 seconds without the hand contacting the head or face
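The Head Hitting definition above is an episode-based measure: contacts separated by 30 seconds or less belong to the same episode, and a gap longer than 30 seconds ends it. A hypothetical sketch of that grouping rule (contact timestamps in seconds are an assumed input format, and episode bounds are taken as first and last contact, one way to operationalize the definition):

```python
def segment_episodes(contact_times, offset=30):
    """Group head-to-hand contact times (in seconds) into episodes.

    A new episode starts whenever more than `offset` seconds pass with no
    contact. Returns (start, end) pairs using the first and last contact
    of each episode.
    """
    episodes = []
    for t in sorted(contact_times):
        if episodes and t - episodes[-1][1] <= offset:
            episodes[-1][1] = t  # within 30 s of the last contact: same episode
        else:
            episodes.append([t, t])  # gap > 30 s: a new episode begins
    return [tuple(e) for e in episodes]

# Contacts at 0, 10, and 20 s form one episode; a contact at 100 s starts another
print(segment_episodes([0, 10, 20, 100]))  # → [(0, 20), (100, 100)]
```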
Data Collection Video 1:
12PM | 1PM | 2PM | 3PM | OBSERVER'S NAMES
(F) Saying No
(F) Throwing Cards
If you have any questions regarding a RB or TB, please refer to the PBSP.
Data Collection Video 2:
12PM | 1PM | 2PM | 3PM | OBSERVER'S NAMES
(I) Skin Pinching
If you have any questions regarding a RB or TB, please refer to the PBSP.
Data Collection Video 3:
12PM | 1PM | 2PM | 3PM | OBSERVER'S NAMES
(D) Head Hitting
If you have any questions regarding a RB or TB, please refer to the PBSP.
Name:_____________________________ Date:_________________
OJT Data Collection Check-Off
Watch the 10-minute video and collect data.
Instructions on How to Collect Data
(F) = Frequency: A tally mark for each time the behavior occurred during each hour of your
shift. (e.g., four behaviors = I I I I)
(D) = Duration: Put the number of minutes the behavior occurred during each hour of your shift,
rounded up to the nearest minute. (e.g., 1m 30s = 2m)
(I) = Interval: Put a check mark if the behavior occurs during the hour, regardless of how many
times. (e.g., 1, 5, or 15 behaviors all = √)
Put a zero for each hour the behavior did not occur. (e.g., no behavior = 0)
Behavior Definition Quick Reference
Hand Flapping: An episode begins when the hands start to move up and down or side to side
and ends when the movement stops
Data Collection Video 4:
12PM | OBSERVER'S NAMES
(F) Hand Flapping
If you have any questions regarding a RB or TB, please refer to the PBSP.
OJT Check-Off    Circle: Pre-Test / Post-Test    Date: ________________________
Use Reinforcement Tool Checklist
Staff Name & Apt.: _________________________    Data Collector: _______________________
Scenario: Sweeping the floor
Step Retraining 1 Retraining 2 Comments
Yes No N/A Yes No N/A Yes No N/A
1. Tell the person what behavior you liked
*The participant states an appropriate behavior that the client is engaging in
2. Provide a consequence for the behavior that matches the value of the behavior
*i.e. Verbal Praise, Break, Walk, Snack
3. Provide the positive consequence within 3-7 seconds of recognizing the appropriate behavior
*Within 3-7 seconds of client sweeping
4. Use sincere and appropriate facial expression, tone of voice and body language
*relaxed body posture, positive affect
5. Say nothing and do nothing about junk behavior throughout the process
*Mark Yes if Ignores “whining” *Mark No if argues with individual
*Mark N/A if Instructor did not engage in Junk
6. Stay cool *Score Yes if uses calm voice & Avoids
coercives *Score No if not calm or uses coercives
*Do not mark N/A
TOTAL SCORE
OJT Check-Off    Circle: Pre-Test / Post-Test    Date: ________________________
Pivot Tool Checklist
Staff Name & Apt.: _________________________    Data Collector: _______________________
Scenario: Whining workshop is too hard / throwing workshop materials on the floor
Step Retraining 1 Retraining 2 Comments
Yes No N/A Yes No N/A Yes No N/A
1. Say nothing about the junk behavior
*Mark No if they talk about the whining, say “don’t do that” or “stop”, talk about
throwing objects
2. Actively attend to another person or activity
*For example mark Yes if looking at paperwork or pretends to engage with
materials or looks/turns away from client
3. Provide reinforcement for the better behavior.
Within 10-seconds of client picking up work materials or working on activity
reinforce i.e. Verbal Praise, Break, Walk, Snack
Mark No if praise is not specific
4. Stay Cool
*Score Yes if uses calm voice & Avoids coercives
*Score No if not calm or uses coercives *Do not mark N/A
TOTAL SCORE
OJT Check-Off    Circle: Pre-Test / Post-Test    Date: ________________________
Protect/Redirect Tool Checklist
Staff Name & Apt.: _________________________    Data Collector: _______________________
Scenario: SIB-Hitting head
Step Retraining 1 Retraining 2 Comments
**Must successfully complete any step on 100% of opportunities to score a Yes**
Yes No N/A Yes No N/A Yes No N/A
1. Get within arm’s reach of the
person and physically intervene to stop/block the dangerous behavior within 10s
2. Can say stop/phrase ONCE and
continue blocking without talking
3. Stay Calm and Cool (avoid coercion)
4. Ignore Junk Behavior
5. When you see calm or better behavior, use reinforcement
Either:
a) Wait (block if necessary) b) Offer a new activity
TOTAL SCORE
REFERENCES
Baer, D. M., Wolf, M. M., & Risley, T. R. (1987). Some still-current dimensions of applied
behavior analysis. Journal of Applied Behavior Analysis, 20, 313-327.
Berard, K. P., & Smith, R. G. (2008). Evaluating a positive parenting curriculum package: An
analysis of the acquisition of key skills. Research on Social Work Practice, 18, 442-452.
Catania, C. M., Almeida, D., Liu-Constant, B., & Reed, F. D. D. (2009). Video modeling to train
staff to implement discrete trial instruction. Journal of Applied Behavior Analysis, 42,
387-392.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper
Saddle River, New Jersey: Pearson Education, Inc.
Gardner, J. M. (1972). Teaching behavior modification to nonprofessionals. Journal of Applied
Behavior Analysis, 5, 517-521.
Homlitas, C., Rosales, R., & Candel, L. (2014). A further evaluation of behavioral skills training
for implementation of the picture exchange communication system. Journal of Applied
Behavior Analysis, 47, 198-203.
Iwata, B. A., Wallace, M. D., Kahng, S., Lindberg, J. S., Roscoe, E. M., Conners, J., Hanley, G.
P., Thompson, R. H., & Worsdell, A. S. (2000). Skill acquisition in the implementation of
functional analysis methodology. Journal of Applied Behavior Analysis, 33, 181-194.
Johnston, J. M., & Pennypacker, H. S. (2009). Strategies and tactics of behavioral research (3rd
ed.). New York, New York: Routledge.
Kahng, S., Ingvarsson, E. T., Quigg, A. M., Seckinger, K. E., & Teichman, H. M. (2011).
Defining and measuring behavior. In W. W. Fisher, C. C. Piazza, & H. S. Roane (Eds.),
Handbook of applied behavior analysis (pp.113-131). New York: The Guilford Press.
Kazdin, A. E. (1977). Artifact, bias, and complexity of assessment: The ABCs of reliability.
Journal of Applied Behavior Analysis, 10, 141-150.
Kuhn, S. A. C., Lerman, D. C., & Vorndran, C. M. (2003). Pyramidal training for families of
children with problem behavior. Journal of Applied Behavior Analysis, 36, 77-88.
Latham, G. I. (1994). The power of positive parenting. Logan, Utah: P & T Ink.
Lavie, T., & Sturmey, P. (2002). Training staff to conduct a paired-stimulus preference
assessment. Journal of Applied Behavior Analysis, 35, 209-211.
Macurik, K. M., O’Kane, N. P., Malanga, P., & Reid, D. H. (2008). Video training of support
staff in intervention plans for challenging behavior: Comparison with live training.
Behavioral Interventions, 23, 143-163.
Maheady, L., Smith, C., & Jabot, M. (2013). Utilizing evidence-based practice in teacher
preparation. In B. Cook, M. Tankersley, & T. Landrum (Eds.), Evidence-based practice
(pp. 121-148). UK: Emerald Group Publishing Limited.
Miles, N. I., & Wilder, D. A. (2009). The effects of behavioral skills training on caregiver
implementation of guided compliance. Journal of Applied Behavior Analysis, 42, 405-
410.
Moore, J. W., & Fisher, W. W. (2007). The effects of videotape modeling on staff acquisition of
functional analysis methodology. Journal of Applied Behavior Analysis, 40, 197-202.
Neef, N. A., Trachtenberg, S., Loeb, J., & Sterner, K. (1991). Video-based training of respite
care providers: An interactional analysis of presentation format. Journal of Applied
Behavior Analysis, 24, 473-486.
Nielsen, D., Sigurdsson, S. O., & Austin, J. (2009). Preventing back injuries in hospital settings:
The effects of video modeling on safe patient lifting by nurses. Journal of Applied
Behavior Analysis, 42, 551-561.
Page, T. J., Iwata, B. A., & Reid, D. H. (1982). Pyramidal training: A large-scale application
with institutional staff. Journal of Applied Behavior Analysis, 15, 335-351.
Parsons, M. B., Rollyson, J. H., & Reid, D. H. (2012). Evidence-based staff training: A guide for
practitioners. Behavior Analysis in Practice, 5, 2-11.
Parsons, M. B., Cash, V. B., & Reid, D. H. (1989). Improving residential treatment services:
Implementation and norm-referenced evaluation of a comprehensive management
system. Journal of Applied Behavior Analysis, 22, 143-156.
Parsons, M. B., & Reid, D. H. (1995). Training residential supervisors to provide feedback for
maintaining staff teaching skills with people who have severe disabilities. Journal of
Applied Behavior Analysis, 28, 317–322.
Pence, S. T., St. Peter, C. C., & Tetreault, A. S. (2012). Increasing accurate preference
assessment implementation through pyramidal training. Journal of Applied Behavior
Analysis, 45, 345-359.
Reid, D. H., Parsons, M. B., & Green, C. W. (2012). The supervisor’s guidebook: Evidence-
based strategies for promoting work quality and enjoyment among human service staff.
Morganton, North Carolina: Habilitative Management Consultants Inc.
Repp, A. C., Deitz, D. E. D., Boles, S. M., Deitz, S. M., & Repp, C. F. (1976). Differences
among common methods for calculating interobserver agreement. Journal of Applied
Behavior Analysis, 9, 109-113.
Rosales, R., Stone, K., & Rehfeldt, R. A. (2009). The effects of behavioral skills training on
implementation of the picture exchange communication system. Journal of Applied
Behavior Analysis, 42, 541-549.
Sarokoff, R. A., & Sturmey, P. (2004). The effects of behavioral skills training on staff
implementation of discrete trial teaching. Journal of Applied Behavior Analysis, 37, 535–
538.
Sarokoff, R. A., & Sturmey, P. (2008). The effects of instructions, rehearsal, modeling, and
feedback on acquisition and generalization of staff use of discrete trial teaching and
student correct responses. Research in Autism Spectrum Disorders, 2, 125-136.
Shore, B. A., Iwata, B. A., Vollmer, T. R., Lerman, D. C., & Zarcone, J. R. (1995). Pyramidal
staff training in the extension of treatment for severe behavior disorders. Journal of
Applied Behavior Analysis, 28, 323-332.
Stoutimore, M. R., Williams, C. E., Neff, B., & Foster, M. (2008). The Florida child welfare
behavior analysis services program. Research on Social Work Practice, 18, 367-376.
Van Camp, C. M., Vollmer, T. R., Goh, H., Whitehouse, C. M., Reyes, J., Montgomery, J. L., &
Borrero, J. C. (2008). Behavioral parent training in child welfare: Evaluations of skills
acquisition. Research on Social Work Practice, 18, 377-391.
Van Den Pol, R. A., Reid, D. H., & Fuqua, R. W. (1983). Peer training of safety-related skills to
institution staff: Benefits for trainers and trainees. Journal of Applied Behavior Analysis,
16, 139-156.