JOURNAL OF APPLIED BEHAVIOR ANALYSIS

2017, 9999, n/a–n/a

EFFECTS OF COMPUTER-BASED TRAINING ON PROCEDURAL MODIFICATIONS TO STANDARD FUNCTIONAL ANALYSES

LAUREN K. SCHNELL, TINA M. SIDENER, RUTH M. DEBAR, AND JASON C. VLADESCU
CALDWELL UNIVERSITY

AND

SUNGWOO KAHNG
THE UNIVERSITY OF MISSOURI

Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to training materials using interactive software during a 1-day session. Following the training, mean scores on the posttest, novel case probe, and maintenance probe increased for all participants. These results replicate previous findings, extend them to a 1-day training session, and add a measure of participant acceptability of the training. Recommendations for future research on computer-based training and functional analysis are discussed.
Key words: computer-based instruction, functional analysis

This study is based on a thesis submitted by the first author, under the supervision of the second author, to the Department of Applied Behavior Analysis at Caldwell University for the Master's degree in Applied Behavior Analysis.
Address correspondence to: Tina M. Sidener, Department of Applied Behavior Analysis, Caldwell University, 120 Bloomfield Ave., Caldwell, NJ 07006; Phone: (973) 618-3539; Fax: (973) 618-3943. E-mail: [email protected]
doi: 10.1002/jaba.423

© 2017 Society for the Experimental Analysis of Behavior

The functional analysis methodology described by Iwata, Dorsey, Slifer, Bauman, and Richman (1982/1994) has long been identified as one of the most accurate ways of identifying the function of problem behavior (Hanley, Iwata, & McCord, 2003). These procedures involve the manipulation of antecedents and consequences to identify the maintaining variables of problem behavior and are often used as part of a comprehensive assessment. Modifications to the assessment have been developed to decrease the amount of time required to complete it and to address undifferentiated results, safety concerns, time constraints, and the specialized training required to conduct a functional analysis (Hagopian, Rooker, Jessel, & DeLeon, 2013; Hanley et al., 2003).

Hagopian et al. (2013) examined procedural modifications made in response to undifferentiated data during functional analyses. In a review of 176 functional analysis cases, the authors found that experimental design manipulations following undifferentiated standard, multielement functional analyses (i.e., Iwata et al., 1982/1994) were the most effective types of initial procedural modifications, resulting in subsequent differentiation in approximately 85% of cases (33 of 39 functional analyses). Transitioning from a multielement design to a pairwise design (Iwata, Duncan, Zarcone, Lerman, & Shore, 1994) resulted in identification of function in 93% of cases. Implementing an extended alone condition (Vollmer, Iwata, Duncan, & Lerman, 1993b; Vollmer, Marcus, Ringdahl, & Roane, 1995) was effective in identifying a function in 88% of cases.

Researchers have identified several effective methods for training clinicians with limited knowledge of behavior analysis or functional analysis methodology to implement functional analyses. To date, the staff training literature on functional analysis has used behavior skills training, which typically includes instructions, modeling, rehearsal, and feedback (e.g., Ward-Horner & Sturmey, 2012); video modeling (e.g., Moore & Fisher, 2007); instructional DVDs (Trahan & Worsdell, 2011); and workshop delivery (e.g., Wallace, Doney, Mintz-Resudek, & Tarbox, 2004) to train students, teachers, and residential staff with little behavior-analytic knowledge to implement antecedents and consequences during conditions of a functional analysis. Although these staff training protocols have been demonstrated to be effective, the training procedures can be time consuming and may require the trainer to dedicate several hours of training to each staff member over a number of days (Crockett, Fleming, Doepke, & Stevens, 2007; Downs, Downs, & Rau, 2008). Computer-based training has been demonstrated to be an effective and efficient method of staff training (e.g., Desrochers, Clemmons, Grady, & Justice, 2001; Lambert, 2003; Nosik & Williams, 2011; Randell, Hall, Bizo, & Remington, 2007; Williams & Zahed, 1996). Computer-based training involves defining the material to be learned, breaking it into components, and sequencing these components through a computer screen or Internet site (Schaab & Byham, 1985; Williams & Zahed, 1996). Training may be delivered using text, graphics, or sounds and can include various levels of interaction, which typically require the trainee to answer questions to test for comprehension. Computer-based training programs can be delivered over a 1-day period and can be disseminated across multiple staff during the same training session (Williams & Zahed, 1996; Wisma, 1991).

Little research has been conducted on training staff to analyze functional analysis data. Although correct delivery of antecedents and consequences across conditions is the primary component necessary to conduct a functional analysis, there are additional steps required for the assessment to be successfully completed (Iwata et al., 2000; Wallace et al., 2004), including interpreting functional analysis data to determine function and modifying the protocol if results are unclear. Refinements to functional analysis procedures increase the likelihood of identifying behavioral function when data are undifferentiated or when it is hypothesized that the results reflect a false positive (e.g., Vollmer et al., 1995). Teaching clinicians to design functional analysis conditions, analyze functional analysis data, and modify the functional analysis if the data are inconclusive could increase the likelihood of effectively designing function-based interventions to treat problem behavior.

To our knowledge, only one study has used computer-based training to teach interpretation of functional analysis data. Chok, Shlesinger, Studer, and Bird (2012) trained Board Certified Behavior Analysts to conduct a functional analysis, interpret the results, make variations as needed in response to undifferentiated data, and create a function-based treatment option. Participants were trained using a combination of instructions, rehearsal, and performance feedback. A portion of this training included a 30-min slide show with some material adapted from Vollmer et al. (1995). The authors noted that the multiple components in the training package may be difficult to replicate in other settings.

The purpose of this study was to replicate and extend the findings of Chok et al. (2012) by evaluating the effectiveness of a brief, 1-day computer-based training package for graduate students on acquisition, generalization, and maintenance of the following skills: determining relevant antecedents and consequences in functional analysis conditions, identifying behavioral function via visual inspection of graphs, and choosing different experimental designs to obtain clear results in a functional analysis. Social validity of the training program was evaluated by the participants following the study.

METHOD

Participants and Setting
Twenty graduate students (1 male, 19 female), ages 22 to 53 years, enrolled in graduate classes in an applied behavior analysis program, participated in the study. Participants received extra credit in one of their current classes for participating; this extra credit was contingent on full participation but not on level of mastery. Prior to the study, participants' academic histories, as well as their exposure to the functional analysis literature, functional analysis procedural variations, and visual presentation of functional analysis data, were assessed via a questionnaire. Of the 20 participants, 13 had experience observing or conducting a functional analysis. All pretest, training, and posttest sessions were conducted in intervention rooms or classrooms at the participants' university. These rooms were equipped with desks, chairs, and laptop and/or desktop computers.

Training Materials
Materials consisted of an online tutorial created with Adobe Captivate 7, an interactive software program that allowed the experimenter to enrich Microsoft PowerPoint (97-2004) slides with audio, multimedia, interactivity, and quizzes (see Supporting Information). Participants completed the online tutorial on individual desktop or laptop computers while wearing headphones. Up to four participants were in one room at a time, but participants could not view each other's screens.


The tutorial included four training modules, each with 10 to 15 slides and embedded comprehension checks and quizzes (described below). Participants were asked to click on each slide to advance to the next. Pre- and posttests included 30 case descriptions and examples of functional analysis graphs depicting problem behavior in the context of brief (Northup et al., 1991), standard multielement (Iwata et al., 1982/1994), extended alone (Vollmer et al., 1995), and pairwise (Iwata et al., 1994) functional analyses. Ten additional case descriptions were included in a novel case probe. Although novel, these case descriptions were identical in format to those found in the pre- and posttests and included a short vignette and an exemplar functional analysis graph. Patterns of responding represented the following conclusions: attention-maintained, escape-maintained, socially mediated, undifferentiated, and automatically maintained behavior. Participants could choose to refer the hypothetical client to treatment, conduct a multielement functional analysis, conduct a pairwise functional analysis, or conduct extended alone sessions. Combinations of the functional analysis design responses, function-related responses, and decision-making responses are depicted in the Supporting Information. Data markers in the graph legends were counterbalanced across graphs, with each condition assigned a different symbol, to increase the likelihood that participants attended to the data paths.

To assess the social validity of the training materials, three doctoral-level Board Certified Behavior Analysts were asked to review the 40 case descriptions and corresponding functional analysis graphs (30 training cases, 10 novel cases). Respondents were asked to analyze each graph and respond to a multiple-choice question regarding the function of the behavior. They were also asked to rate their satisfaction with each visual display using a 5-point Likert-type scale with five potential responses varying from 'strongly disagree' to 'strongly agree'.

‘strongly agree’. Changes to the stimuli were made prior to the study based upon respondent suggestions. Response Definition and Data Collection Data were collected on the percentage of correct responses to multiple-choice questions during the pretest, posttest, novel probes and a maintenance assessment for each participant. Every multiple-choice question had four response options; only one could be selected. Data were summarized by dividing the total number of correct responses by the number of total opportunities to respond and multiplying by 100%. Data were also collected on the number of times it took each participant to meet criterion on the training modules and the duration in minutes to complete each module. Pretest and posttest responses were electronically recorded and sent to the experimenter automatically through an email generated by the webserver. Experimental Design and Procedure The effects of training were evaluated using a pretest–posttest design within a multiple baseline design across participants. Two groups of 10 participants were matched for the multiple baseline design by number (Participants 1 and 11, 2 and 12, 3 and 13, 4 and 14, etc.). All participants completed the pretest, training, and posttest in one day. Pretest. Participants in the first group (1-10) were asked to respond to 70 multiple-choice questions using the online software program. Ten of these questions asked about reinforcement contingencies during attention, demand, alone, and play conditions of a functional analysis and were similar to those of Iwata et al. (2000). Participants could select from four potential responses and had an unlimited amount of time to respond. Participants were also presented with 30 case descriptions (Flesch–Kincaid Grade 12) with corresponding

Each case description briefly presented client information, the target behavior, and a functional analysis graph (see Supporting Information). The first question asked participants to identify the function of behavior based on visual inspection of the data by selecting 'attention', 'escape', or 'undifferentiated'. There was no option to select 'automatic reinforcement' because undifferentiated data paths may be a result of an automatic reinforcement function or a lack of discrimination among conditions, and the focus of this training was to modify procedures until a clear function emerged. The second question asked participants to determine the next step in the functional analysis process. Participants could select from five responses: 'brief functional analysis', 'multielement functional analysis', 'extended alone condition', 'pairwise analysis', or 'refer client to treatment'. When the graph depicted an extended alone condition, participants could choose from two responses identifying the function shown: social or automatic reinforcement. No feedback was provided.

The pretest procedure for the second group of participants (11-20) was identical except that a second version of the pretest was administered immediately following the first. The questions in the second pretest were the same as those in the first pretest, but the questions and answers were presented in a randomized order.

Training. Training occurred immediately following the pretest. The training tutorial began with an instruction slide that provided an overview of the procedure, including the criterion for advancement to the next module in the sequence. Each module had one comprehension check embedded in the slides that asked a question related to a functional analysis graph exemplar. A voice-over presented the slide content when the module page opened and paused for 5 s to provide an opportunity for the participant to respond to the multiple-choice options.

The computer program provided feedback by highlighting the correct response on the screen. If the participant selected an incorrect answer during the comprehension check, the correct answer was immediately highlighted and the voice-over read the response aloud.

The computer tutorial was divided into four modules that each consisted of 10-12 slides. Module 1 included an overview of the antecedents and consequences associated with each functional analysis condition as described by Iwata et al. (2000). Module 2 included an overview of the functional analysis literature, including the brief functional analysis (Northup et al., 1991), the multielement or standard functional analysis (Iwata et al., 1982/1994), the extended alone condition (Vollmer et al., 1995), and the pairwise analysis (Iwata et al., 1994; Vollmer, Iwata, Zarcone, Smith, & Mazaleski, 1993a). Module 2 also discussed the similarities and differences among all variations and the reasons each variation may be required (Vollmer et al., 1995). Module 3 provided an overview of visual analysis and graph interpretation, including analyzing a functional analysis graph to determine function (Hagopian, Fisher, Thompson, & Owen-DeSchryver, 1997) and whether data are differentiated or undifferentiated. Module 4 provided an overview of the decision-making protocol (see Supporting Information). This module guided participants on whether to refer the client to treatment and what, if any, functional analysis variation should be implemented following undifferentiated data paths (Chok et al., 2012; Vollmer et al., 1995). For example, if the maintaining variables of the target behavior could not be identified from a brief functional analysis graph (Northup et al., 1991), the module suggested transitioning to a multielement functional analysis (Iwata et al., 1982/1994). If the data were undifferentiated at the conclusion of a multielement functional analysis, it was suggested to conduct an extended alone condition (Vollmer et al., 1995). This involved conducting repeated alone sessions to assess whether the client's behavior was maintained by automatic reinforcement or the undifferentiated data were the result of a failure to discriminate contingencies. If responding decreased during the extended alone condition, Module 4 suggested conducting a pairwise analysis (Iwata et al., 1994).
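For readers who find it helpful, the Module 4 branching logic described above can be summarized in a few lines of code. This is our own reconstruction of the protocol as stated in the text, not the tutorial's actual implementation; the function and argument names are illustrative assumptions.

```python
def next_step(current_design: str, differentiated: bool,
              responding_decreased_when_alone: bool = False) -> str:
    """Reconstruction (not the study's software) of the Module 4
    decision protocol for functional analysis (FA) results."""
    if differentiated:
        # A clear function emerged: refer the client to treatment.
        return "refer client to treatment"
    if current_design == "brief":
        # Undifferentiated brief FA -> run a multielement FA.
        return "conduct multielement functional analysis"
    if current_design == "multielement":
        # Undifferentiated multielement FA -> run extended alone sessions.
        return "conduct extended alone condition"
    if current_design == "extended alone":
        if responding_decreased_when_alone:
            # Behavior is likely socially mediated but conditions were not
            # discriminated: test contingencies in a pairwise design.
            return "conduct pairwise analysis"
        # Persistent responding when alone suggests automatic reinforcement.
        return "refer client to treatment"
    # The text does not specify a step beyond the pairwise analysis.
    raise NotImplementedError("beyond the described protocol")
```

These branches correspond to the same next-step response options participants chose among on the pre- and posttests.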

At the end of each training module, an online quiz with 10-15 multiple-choice questions was displayed on the screen, requiring the participant to answer questions about the information provided in the module. The participant selected each response from an array of four possible choices. Although these questions were different from the questions on the pre- and posttests, they were similar in format and content. Incorrect responses resulted in immediate presentation of the correct response and a display of the relevant information previously presented in the tutorial. Correct responses resulted in a green feedback box congratulating the participant on responding correctly. At the conclusion of each module's quiz, participants were provided with their overall quiz score and instructed to retake the module or progress to the next module. Participants were required to score 100% correct on the quiz to progress to the next module in the sequence. If the criterion was not met, the training tutorial redirected them to the first slide in the module. Participants had the opportunity to retake the failed module and quiz an unlimited number of times, with each occurrence requiring them to rewatch the failed module from the first slide. Questions and answers were always re-presented in randomized order. After passing all four modules, participants completed the posttest.

Posttest. The posttest was identical to the pretest except that the questions and multiple-choice options were presented in a different, random order. No feedback was provided. Participants who scored less than 90% correct were offered the opportunity to retake the entire training tutorial (Modules 1-4).
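The progression contingencies described above (a 100% quiz criterion per module, full-module retakes, and an optional tutorial retake after a posttest below 90%) amount to a simple mastery loop. Below is a minimal sketch under assumed names; the actual Captivate logic was not published.

```python
import random

def run_module(slides, quiz, present_slide, answer_is_correct):
    """Loop a module until the learner scores 100% on its quiz.
    Each failure restarts the module from its first slide, and quiz
    questions are re-presented in a newly randomized order."""
    while True:
        for slide in slides:
            present_slide(slide)                # rewatch the entire module
        items = random.sample(quiz, len(quiz))  # reshuffle questions
        score = sum(answer_is_correct(q) for q in items) / len(items)
        if score == 1.0:                        # mastery criterion
            return

def run_tutorial(modules, posttest, present_slide, answer_is_correct):
    """Run Modules 1-4 in sequence, then score the posttest; a score
    below 90% triggers the offer (not a requirement) to retake the
    whole tutorial."""
    for slides, quiz in modules:
        run_module(slides, quiz, present_slide, answer_is_correct)
    score = sum(answer_is_correct(q) for q in posttest) / len(posttest)
    offer_retake = score < 0.90
    return score, offer_retake
```

Passing `present_slide` and `answer_is_correct` as callables keeps the sketch independent of any particular courseware.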


Novel Case Probe
After completing the posttest, participants were presented with 10 novel cases and graphs. As in the pretest and posttest, participants were asked to respond to two questions per case regarding the hypothesized function of the behavior and the next steps for assessment, as well as 10 novel multiple-choice questions about reinforcement contingencies during functional analysis conditions. No feedback was provided.

Maintenance Probe
To assess maintenance of learned skills, participants were asked to repeat the posttest measure 2 weeks after training. All cases, questions, and multiple-choice responses were randomized, and no feedback was provided.

Social Validity
Following the study, participants were asked to respond to six questions about the acceptability of the training program by selecting from 1 (strongly disagree) through 5 (strongly agree) and to comment on their experience in a space provided (Table 1).

Procedural Integrity and Interobserver Agreement
Procedural integrity data were collected on the functioning of the computer-based tutorial software to ensure that the program opened; presented participants with the correct pretest, modules, posttest, novel case probe, and maintenance test; and delivered or withheld feedback. A second trained observer was present to record data on all steps of the training protocol (e.g., participant logged in to the software using a username, title screen opened, slide advanced when the participant clicked on the screen). Procedural integrity data were collected for 50% of all training sessions, and the mean integrity score was 98.9% (range, 94%-100%). On two occasions, the software program froze on the current screen and did not advance until the programmer typed in a code on her computer. The software was programmed to automatically score and record participant responses; therefore, calculation of interobserver agreement was unnecessary.

RESULTS

Figures 1 and 2 show the percentage of total correct responses for each participant and the mean percentage of correct responses for all participants, respectively, during each condition.

Table 1
Participant Social Validity Summary

Survey Item | Distribution of ratings (1-2-3-4-5) | M
1. Training graduate students to modify a functional analysis using a computer tutorial was successfully achieved. | 0-0-0-2-17 | 4.9
2. The content of the tutorial was at an appropriate difficulty level. | 0-0-0-2-17 | 4.9
3. I can apply the content presented in the training when conducting a functional analysis that results in undifferentiated data. | 0-0-0-1-18 | 4.9
4. There were enough examples for me to learn the material. | 0-0-0-2-17 | 4.9
5. After this training I feel confident in making decisions related to modifying a functional analysis. | 0-0-0-4-15 | 4.8
6. I enjoyed the format of this tutorial. | 0-0-0-5-14 | 4.7

Note: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree.
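As a check on Table 1, the reported means follow directly from the rating distributions (19 respondents per item). A short verification script (ours, not part of the study materials):

```python
# Recompute the Table 1 means from the reported rating distributions.
distributions = {
    "Item 1": [0, 0, 0, 2, 17],   # counts of ratings 1 through 5
    "Item 2": [0, 0, 0, 2, 17],
    "Item 3": [0, 0, 0, 1, 18],
    "Item 4": [0, 0, 0, 2, 17],
    "Item 5": [0, 0, 0, 4, 15],
    "Item 6": [0, 0, 0, 5, 14],
}
for item, counts in distributions.items():
    mean = sum(r * n for r, n in zip(range(1, 6), counts)) / sum(counts)
    print(f"{item}: n = {sum(counts)}, M = {mean:.1f}")
# Output: Items 1-4 -> M = 4.9; Item 5 -> M = 4.8; Item 6 -> M = 4.7 (n = 19 each)
```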

[Figure 1: line graphs; y-axis = Percentage of Correct Responses; x-axis = Pretest 1, Pretest 2, Posttest, Novel Case Probe, Maintenance]
Figure 1. The percentage of correct responses on pretest, posttest, novel case probe, and maintenance measures for Participants 1-5 and 11-15.

[Figure 2: line graphs; y-axis = Percentage of Correct Responses; x-axis = Pretest 1, Pretest 2, Posttest, Novel Case Probe, Maintenance]
Figure 2. The percentage of correct responses on pretest, posttest, novel case probe, and maintenance measures for Participants 6-10 and 16-20. Mean correct responses for Group 1 and Group 2 are also displayed.

All participants (1-20) increased their posttest, novel case probe, and maintenance scores after exposure to the computer-based training tutorial. The mean pretest score across participants in Group 1 (Participants 1-10) was 68% (range, 43%-83%). Following computer-based training, mean scores improved for all 10 participants during the posttest, with an overall mean of 94% (range, 75%-100%). The mean score across Participants 1-10 during the novel case probe was 95% (range, 77%-100%), and during the maintenance probe (Participants 1-9) 2 weeks after the computer-based training tutorial, the mean was 96% (range, 82%-100%). Participant 10 did not meet the 90% passing criterion on the posttest; her score on the posttest was 75% correct, and her performance on the novel case probe was 77% correct. Participant 10 did not retake the tutorial prior to completing the novel case probe due to time constraints.

The mean pretest score for participants in Group 2 (Participants 11-20) was 62% (range, 43%-76%) for Pretest 1 and 62% (range, 41%-79%) for Pretest 2. Following computer-based training, mean scores improved for all 10 of these participants during the posttest, with an overall mean of 93% (range, 90%-100%). The mean score across the 10 participants in Group 2 during the novel case probe was 98% (range, 90%-100%), and during the maintenance probe 2 weeks after the computer-based training tutorial, the mean was 90% (range, 82%-93%).

The mean number of module attempts to criterion ranged from 1.3 to 2.3, and mean training times to complete the modules ranged from 9 to 45 min. For Module 1 (Functional Analysis Conditions), the mean number of attempts to criterion was 2 (range, 1 to 5), with a mean training time of 28 min (range, 10 to 110); these data reflect the highest number of module attempts to criterion across the four modules and the longest duration to completion. Module 2 (Overview of Functional Analysis Variations) was completed with a mean duration of 18 min (range, 4 to 47). Module 3 (Overview of Visual Analysis and Graph Interpretation) required a mean of 1.3 attempts to criterion (range, 1 to 2), with a mean duration of 20 min (range, 9 to 29); these data reflect the fewest module attempts to criterion across the four modules, although not the shortest duration to completion. Module 4 (Overview of the Decision-Making Process) required a mean of 1.5 attempts to criterion (range, 1 to 2), with a mean duration of 16 min (range, 6 to 28).

Table 1 shows the results of the participant satisfaction survey used to assess social validity across 19 of the 20 participants. All respondents agreed or strongly agreed that the procedures were successful in training participants, were at an appropriate difficulty level, were applicable to real-world scenarios, and were enjoyable.

DISCUSSION

The purpose of this study was to evaluate the effectiveness of a brief, 1-day computer-based training package on acquisition, generalization, and maintenance of the following skills of graduate students: identifying relevant antecedents and consequences in functional analysis conditions, identifying behavioral function via visual inspection of graphs, and making procedural variations to obtain clear results in a functional analysis. The results showed that performance improved from baseline and maintained for 19 of 20 participants following the computer-based training. This study extends the work of Chok et al. (2012) by demonstrating the efficacy of a stand-alone, computer-based tutorial that was administered from start to finish within a 1-day period to a large number of trainees concurrently, without the presence of a staff trainer.

These findings also replicate the research on training participants to identify relevant antecedents and consequences in conditions of a standard functional analysis (e.g., Iwata et al., 2000; Lambert, Bloom, Kunnavatana, Collins, & Clay, 2013; Moore et al., 2002; Phillips & Mudford, 2008; Wallace et al., 2004) and show that computer-based training is an option for teaching these skills. As online learning continues to grow rapidly across higher education institutions, one concern is how to maximize student learning by providing quality online education (Lee, 2014). Indeed, it was projected that at least 80% of graduate students would have taken an online class by the end of 2014, and that 65% of U.S. institutions offered online options to students (Allen & Seaman, 2005). The format and delivery of the training evaluated in the current study, in combination with its effectiveness, make it a viable replacement for, or supplement to, traditional lecture.

The training package also effectively taught participants to respond to novel cases and interpret graphically depicted data. However, there was no evaluation with actual clients in clinical settings. Future research should examine generalization of trained skills with clients in relevant settings. Additionally, a maintenance probe was administered 2 weeks posttraining, and 19 of 20 participants who completed the maintenance measure scored at least 80% correct. Although it is possible that participants "studied" for the maintenance probe, the consistent results across participants provide support for the effects of the training package on retention. Future research might evaluate retention of skills over periods longer than 2 weeks.

The social validity outcomes of the present study suggest that participants were highly satisfied with the tutorial content and the mode in which instruction was delivered. One limitation is that Participant 10 did not complete the social validity survey due to stated time constraints. This information would have been valuable, as including data from a participant who did not meet the mastery criterion would have provided a more representative sample of the social validity of the teaching procedures. Another limitation of the current study is that the training was not directly compared to another method of instruction in terms of efficiency, effectiveness, or cost. Although the present study showed that participants completed the training modules in an average of 1.9 hr, it is unclear what results could have been obtained within a similar duration using a traditional college lecture or another method of self-instruction. The tutorial was created in 29 hr over 4 months, and a consultant was hired at $20 per hr to assist with inputting the training material into the computer software program. The consultant provided access to the computer software at no cost to the experimenter. The experimenter and consultant met on a biweekly basis for 1-2 hr to review the progress of the project and make edits to the program. Although this was a lengthy process, creating this type of tutorial may be more efficient and cost-effective over time than other methods of instruction if it is used with many trainees. Future studies might directly compare time, effectiveness, and cost measures of different types of instruction to provide data for conclusions about efficiency and cost-effectiveness.

Because all questions on the pre- and posttests were delivered in a multiple-choice format, participant responses were limited to the given choices. In addition, participants were provided with one correct answer to select, whereas in clinical practice this is typically not the case. Future research might use this type of computer-based training in combination with open-ended questions such as, "What would you do next when presented with these data?" Providing multiple answers to questions regarding the function of behavior should also be investigated, as this would represent more naturalistic occurrences in clinical practice.

Despite the limitations noted above, the results of this study support the efficacy of this computer-based tutorial in teaching graduate students to analyze functional analysis data and to recognize necessary procedural variations when undifferentiated data are obtained. Participants entered the study with limited repertoires regarding the targeted concepts, as indicated by their pretest scores. Following completion of the training, participants exited the study with substantially increased posttest scores. These results, along with high scores on the social validity assessment, suggest that computer-based training is a viable option for training graduate students on some aspects of the functional assessment process.

REFERENCES

Allen, I., & Seaman, J. (2005). Growing by degrees: Online education in the United States, 2005. Retrieved from http://www.onlinelearningsurvey.com/reports/growing-by-degrees.pdf
Chok, J. T., Shlesinger, A., Studer, L., & Bird, F. L. (2012). Description of a practitioner training program on functional analysis and treatment development. Behavior Analysis in Practice, 5, 25–36.
Crockett, J. L., Fleming, R. K., Doepke, K. J., & Stevens, J. S. (2007). Parent training: Acquisition and generalization of discrete trial teaching skills with parents of children with autism. Research in Developmental Disabilities, 28, 23–36. https://doi.org/10.1016/j.ridd.2005.10.003
Desrochers, M. N., Clemmons, T., Grady, M., & Justice, B. (2001). An evaluation of simulations in developmental disabilities (SIDD): Instructional software that provides practice in behavioral assessment and treatment decisions. Journal of Technology in Human Services, 17, 15–27.
Downs, A., Downs, R. C., & Rau, K. (2008). Effects of training and feedback on discrete trial teaching skills and student performance. Research in Developmental Disabilities, 29, 235–246. https://doi.org/10.1016/j.ridd.2007.05.001
Hagopian, L. P., Fisher, W. W., Thompson, R. H., & Owen-DeSchryver, J. (1997). Toward the development of structured criteria for interpretation of functional analysis data. Journal of Applied Behavior Analysis, 30, 313–326. https://doi.org/10.1901/jaba.1997.30-313
Hagopian, L. P., Rooker, G. W., Jessel, J., & DeLeon, I. G. (2013). Initial functional analysis outcomes and modifications in pursuit of differentiation: A summary of 176 inpatient cases. Journal of Applied Behavior Analysis, 46, 88–100. https://doi.org/10.1002/jaba.25
Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003). Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis, 36, 147–185. https://doi.org/10.1901/jaba.2003.36-147
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1982/1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197–209. https://doi.org/10.1901/jaba.1994.27-197
Iwata, B. A., Duncan, B. A., Zarcone, J. R., Lerman, D. C., & Shore, B. A. (1994). A sequential, test-control methodology for conducting functional analyses of self-injurious behavior. Behavior Modification, 18, 289–306. https://doi.org/10.1177/01454455940183003
Iwata, B. A., Wallace, M. D., Kahng, S. W., Lindberg, J. S., Roscoe, E. M., Conners, J., … Worsdell, A. S. (2000). Skill acquisition in the implementation of functional analysis methodology. Journal of Applied Behavior Analysis, 33, 181–194. https://doi.org/10.1901/jaba.2000.33-181
Lambert, M. E. (2003). Using computer simulations in behavior therapy training. Computers in Human Services, 5, 1–12. https://doi.org/10.1300/J407v05n03_01
Lambert, J. M., Bloom, S. E., Kunnavatana, S. S., Collins, S. D., & Clay, C. J. (2013). Training residential staff to conduct trial-based functional analyses. Journal of Applied Behavior Analysis, 46, 296–300. https://doi.org/10.1002/jaba.17
Lee, J. (2014). An exploratory study of effective online learning: Assessing satisfaction levels of graduate students of mathematics education associated with human and design factors of an online course. International Review of Research in Open and Distance Learning, 15, 111–132. https://doi.org/10.19173/irrodl.v15i1.1638
Moore, J. W., Edwards, R. P., Sterling-Turner, H. E., Riley, J., DuBard, M., & McGeorge, A. (2002). Teacher acquisition of functional analysis methodology. Journal of Applied Behavior Analysis, 35, 73–77. https://doi.org/10.1901/jaba.2002.35-73
Moore, J. W., & Fisher, W. W. (2007). The effects of videotape modeling on staff acquisition of functional analysis methodology. Journal of Applied Behavior Analysis, 40, 197–202. https://doi.org/10.1901/jaba.2007.24-06
Northup, J., Wacker, D., Sasso, G., Steege, M., Cigrand, K., Cook, J., & DeRaad, A. (1991). A brief functional analysis of aggressive and alternative behavior in an outclinic setting. Journal of Applied Behavior Analysis, 24, 509–522. https://doi.org/10.1901/jaba.1991.24-509
Nosik, M. R., & Williams, W. L. (2011). Component evaluation of a computer based format for teaching discrete trial and backward chaining. Research in Developmental Disabilities, 32, 1694–1702. https://doi.org/10.1016/j.ridd.2011.02.022
Phillips, K. J., & Mudford, O. C. (2008). Functional analysis skills training for residential caregivers. Behavioral Interventions, 23, 1–12. https://doi.org/10.1002/bin.252
Randell, T., Hall, M., Bizo, L., & Remington, B. (2007). DTkid: Interactive simulation software for training tutors of children with autism. Journal of Autism and Developmental Disorders, 37, 637–647. https://doi.org/10.1007/s10803-006-0193-z
Schaab, N., & Byham, W. (1985). Effectiveness of computer-based training/interactive video for development of interactive skills: A breakthrough in technology. Pittsburgh, PA: Development Dimensions International.
Trahan, M. A., & Worsdell, A. S. (2011). Effectiveness of an instructional DVD in training college students to implement functional analyses. Behavioral Interventions, 26, 85–102. https://doi.org/10.1002/bin.324
Vollmer, T. R., Iwata, B. A., Duncan, B. A., & Lerman, D. C. (1993b). Extensions of multielement functional analyses using reversal-type designs. Journal of Developmental and Physical Disabilities, 5, 311–325. https://doi.org/10.1007/BF01046388
Vollmer, T. R., Iwata, B. A., Zarcone, J. R., Smith, R. G., & Mazaleski, J. L. (1993a). Within-session patterns of self-injury as indicators of behavioral function. Research in Developmental Disabilities, 14, 479–492. https://doi.org/10.1016/0891-4222(93)90039-M
Vollmer, T. R., Marcus, B. A., Ringdahl, J. E., & Roane, H. S. (1995). Progressing from brief assessments to extended experimental analyses in the evaluation of aberrant behavior. Journal of Applied Behavior Analysis, 28, 561–576. https://doi.org/10.1901/jaba.1995.28-561
Wallace, M. D., Doney, J. K., Mintz-Resudek, C. M., & Tarbox, R. F. (2004). Training educators to implement functional analyses. Journal of Applied Behavior Analysis, 37, 89–92. https://doi.org/10.1901/jaba.2004.37-89
Ward-Horner, J., & Sturmey, P. (2012). Component analysis of behavior skills training in a functional analysis. Behavioral Interventions, 27, 75–92. https://doi.org/10.1002/bin.1339
Williams, T. C., & Zahed, H. (1996). Computer-based training versus traditional lecture: Effect on learning and retention. Journal of Business and Psychology, 11, 297–310. https://doi.org/10.1007/BF02193865
Wisma, T. A. (1991). Computers and operations management. Occupational Health and Safety, 60, 30–36.

Received February 24, 2016
Final acceptance September 13, 2016
Action Editor, Michele Wallace