The Clinical Neuropsychologist, 19: 4–26, 2005. Copyright © Taylor and Francis Ltd. ISSN: 1385-4046. DOI: 10.1080/13854040490888585

BEHAVIORAL DYSCONTROL SCALE-ELECTRONIC VERSION: FIRST EXAMINATION OF RELIABILITY, VALIDITY, AND INCREMENTAL UTILITY

Yana Suchy, Christina Derbidge, and Christy Cope
University of Utah, Salt Lake City, UT, USA

The Behavioral Dyscontrol Scale (BDS) is a clinical measure previously shown to be related to frontal lobe integrity, executive abilities, and functional independence. An electronic version of the scale (BDS-EV) was developed and its reliability and validity were examined. The BDS-EV, the original BDS, and a brief battery of traditional clinical tests were administered to 55 community-dwelling adults ages 18 to 68. The results yielded high internal consistency and provided support for convergent, discriminant, and incremental validity. Overall, the results demonstrate the feasibility of converting the BDS into an electronic instrument and support continued research and development of this instrument.

Keywords: Executive; Behavioral Dyscontrol Scale; Electronic Tests; Age Effects; Neuropsychological Assessment

Address correspondence to: Yana Suchy, Ph.D., Assistant Professor, University of Utah, Department of Psychology, 380 S. 1530 E., Rm. 502, Salt Lake City, UT 84112-0251, USA. Tel.: +1-801-585-0796. Fax: +1-801-581-5841. E-mail: [email protected] Accepted for publication: September 7, 2004.

The Behavioral Dyscontrol Scale (Grigsby & Kaye, 1996) is a brief multidimensional measure originally developed for the assessment of geriatric patients' capacity for independent functioning. The instrument has been shown to have excellent reliability (Grigsby, Kaye, & Robbins, 1992), and validation studies to date have been equally impressive. Specifically, studies conducted with geriatric patients have demonstrated the measure's relationship to executive abilities (Suchy, Blint, & Osmon, 1997) and have shown the BDS to be a strong predictor of functional independence (Grigsby, Kaye, Baxter, Shetterly, & Hamman, 1998; Grigsby, Kaye, Eilertsen, & Kramer, 2000; Grigsby, Kaye, Kowalsky, & Kramer, 2002a, 2002b; Kaye, Grigsby, Robbins, & Korzun, 1990; Suchy et al., 1997).

The BDS consists of 9 tasks of working memory, motor learning and motor programming, and behavioral inhibition. A more detailed description of the BDS can be found in the BDS manual (Grigsby & Kaye, 1996). Exploratory factor analysis of the BDS items generated three factors (Grigsby et al., 1992), which were recently validated with a confirmatory factor analysis (Ecklund-Johnson, Miller, & Sweet, 2004). The BDS factors have been shown to be related to other traditional measures of executive functioning (Suchy et al., 1997), and can be theoretically linked to different components of executive processing.


Specifically, the Motor Programming (MP) factor relies in part on motivation, follow-through, and attention; the Environmental Independence (EI) factor reflects the capacity for inhibition; and the Fluid Intelligence (FI) factor is associated with working memory abilities. These factors have also been differentially related to functional independence among the elderly (Suchy et al., 1997) and to aggressive acting out among psychogeriatric inpatients (Suchy & Bolger, 1999).

Although the BDS was originally developed for work with the elderly, it has shown promise with nongeriatric patients as well. In particular, the BDS and its factors were successful at differentiating non-geriatric TBI patients based on lesion location and severity of injury (Leahy, Suchy, Sweet, & Lam, 2003) and contributed above and beyond traditional measures of executive functioning to TBI patient classification (Suchy, Leahy, Sweet, & Lam, 2003).

Despite these strengths, the BDS has been plagued by several weaknesses. First, the BDS has a somewhat limited range of scores, which makes it susceptible to ceiling and floor effects. This is particularly true for the EI factor, which consists of only two items (Leahy et al., 2003). Second, the BDS is somewhat more difficult to administer and score than is typical for neuropsychological tests, which limits its attractiveness to clinicians. Finally, the BDS scoring system relies heavily on examiner judgment, and as such is susceptible to examiner drift. Although interrater reliability among well-trained examiners is excellent (Grigsby et al., 1992), reliability among independent clinicians or different laboratories is not known.

In order to alleviate the limitations of the original BDS (BDS-O), we set out to develop a computer-administered electronic version of the BDS (BDS-EV) that would decrease the need for examiner training and eliminate subjective scoring, while affording a wider range of scores. The steps taken in the development of this instrument are described below.

Task Development

There were several characteristics of the BDS-O that we aimed to retain in our electronic version. First, the authors of the BDS-O (Grigsby & Kaye, 1996) made it clear that they aimed to diminish the extent to which BDS performance relies on nonexecutive processes, such as language and memory. For this reason, BDS-O administration requires that the examiner take extra care to ensure that the patient understands the instructions prior to test performance. We followed this same format, providing written instructions on the computer screen and requiring that an examiner provide additional explanation and practice as needed. For the same reason, most tasks are preceded by practice trials, and participants are given the opportunity to repeat practice if needed. The second characteristic of the BDS-O that we wanted to retain was its ability to tap into three different components of executive processing: (a) working memory, (b) capacity for inhibition, and (c) set maintenance/response selection. Although we aimed to develop items that were as parallel to the BDS-O items as possible, the more important goal was to develop items that were theoretically consistent with assessment of the above processes.

In addition to retaining certain characteristics of the BDS-O, we also strove to address some of its shortcomings.


First, we wanted to improve on the BDS-O by providing a sufficient number of variables to comprise each composite score, so as to have an adequate sampling of performance. To that end, we developed seven tasks that together generated 15 variables, representing an almost twofold increase from the 9 items on the BDS-O. Second, in order to make the instrument more useful with a variety of populations and to avoid floor and ceiling effects, we included tasks of variable difficulty, from very easy to quite challenging. Third, we wanted to improve the BDS-O by providing a way of equating participants on pure motor and processing speed. This would allow us to assess more purely executive processes, i.e., higher-order processes above and beyond speed. We accomplished this by correcting participants' performance on certain tasks for their own "baseline" performances. In particular, the battery itself is preceded by two baseline tasks: finger tapping and simple choice reaction time. Performance on these tasks is used in some cases to compute corrected variables, and in some cases to determine the timing of certain components of stimulus presentation. In addition, we used performances on "easy" (or executively nondemanding) components of certain tasks as baselines against which we compared performances on the more "difficult" (or executively more demanding) task components. A detailed description of the BDS-EV items can be found in the Method section.

Hardware Development

In order for this instrument to be successful, we felt that a specialized response console was needed. There were two main characteristics that such a console needed to have. First, we wanted this instrument to be useful with a variety of clinical populations, including the elderly and other individuals who may have limited or no experience with computers and who would feel uncomfortable or overwhelmed when faced with the complexity of a regular keyboard. For this reason, we used large response keys that were clearly labeled in accordance with the response demands. In order to avoid confusion, each key was assigned a very specific response function (e.g., a "yes" key, a "no" key, and keys that corresponded to specific colors, numbers, or letters). No key served more than one purpose. Second, because we wanted the test to be as parallel to the original test as possible, we needed a response console that would allow assessment of a variety of hand movements for the assessment of motor learning. For that reason, we equipped the console with (a) a large dome to be used for tapping with the palm of one's hand or fingertips, and (b) a joystick that could be moved in all directions and turned either clockwise or counterclockwise. We purposely selected a joystick that allowed all these movements, so that disorganized movements on the part of the participants would be possible and would be scored as errors. Thus, for example, if a participant turned the joystick counterclockwise instead of clockwise, an error was scored.

Because the ultimate goal of this project was to develop an instrument that would be useful in clinical settings, we set out to make this instrument reasonably cost effective.


Figure 1 Response console for the BDS-EV.

Thus, while touch screens have become popular due to their very professional look and ease of use, they are very expensive. To avoid high cost, we used a metal encasing with sturdy arcade-style response buttons. This made the cost of the response console close to tenfold less than a touch screen would have been. Finally, we designed the response console so as to allow both right-handed and left-handed responding. A photograph of the response console can be found in Figure 1.

The purpose of the present study was to provide a preliminary examination of the BDS-EV in terms of its (a) internal consistency reliability and (b) convergent, discriminant, and incremental validity. To this end, we administered the BDS-EV, the BDS-O, and a brief battery of traditional neuropsychological measures to a group of 55 community-dwelling participants ranging in age from 18 to 68 years. We expected that the BDS-EV and the BDS-O would correlate highly with each other, as well as correlate with other measures of executive functioning. Further, we expected that neither measure would show substantial correlations with a measure of crystallized intelligence (the WAIS-III Information subtest) or with measures of processing speed and cognitive efficiency. Finally, we examined whether the BDS-EV would contribute to participant classification above and beyond the BDS-O and other measures of executive processing.

METHOD

Participants

Participants were 55 healthy community-dwelling adults, drawn from three age groups: "college-age," "middle-age," and "older." Demographic characteristics of the sample can be found in Table 1. The groups did not differ in education or distribution of gender or left-handedness.


Table 1 Demographic characteristics of the sample

                      Age group
                      College (n = 20)   Middle-Aged (n = 16)   Older (n = 19)
Age (years)
  Mean                20.60              45.44                  62.53
  SD                  2.64               3.10                   2.82
  Range               18–28              42–50                  58–68
Education (years)
  Mean                14.25              14.97                  15.58
  SD                  2.17               2.10                   2.10
  Range               12–21              12–18                  13–21
% Female              55                 31                     58
% Left-handed         20                 6                      21

Instruments

Baseline tasks

Finger tapping (FT). The present battery was preceded by an electronic finger tapping task administered on the BDS-EV response console. The task consisted of ten 10-second tapping trials, five with each hand, alternating between the dominant and nondominant hand. Participants rested their arm and hand on the response console and tapped on a centrally located key. They received electronic auditory signals to start and stop tapping.

Choice reaction time task (CRT). Participants were presented with a series of 30 red, blue, and orange circles. They were asked to press the button on the response console that was labeled with the color of the circle. Participants were encouraged to respond as fast as they could upon seeing the circle. Stimuli were presented randomly without replacement. Immediately following a response, a "Get ready" prompt appeared on the screen. The prompt-stimulus interval was 500 ms.

BDS-EV tasks

Please note that a detailed description of the variables generated by the tasks described below can be found in the subsequent section entitled "Variables and Score Computations."

Push-turn-taptap (PTT). This task was developed as a parallel to the four tasks that comprise the MP factor of the BDS-O. Using a single task as a substitute for four tasks was possible because of the variety of motor learning and motor programming abilities that the PTT task assesses, including the ability to learn progressively more difficult hand sequences, the ability to engage in smooth tapping sequences, and the ability to perform tasks effortlessly, smoothly, and rapidly without mistakes. Participants were required to learn four different sequences of three different hand movements.


The three hand movements were as follows: (a) "Push," pushing the joystick on the response console directly away from the participant; (b) "Turn," turning the joystick clockwise; and (c) "Taptap," double-tapping on the white dome on the response console. Participants started with a block in which they learned a two-movement sequence, followed by blocks of three-, four-, and five-movement sequences. Each block began with a presentation of the sequence on the computer screen. After three correct sequences were accomplished, the screen presentation disappeared and participants were expected to perform the sequences from memory. Participants were aware at the outset that the screen presentation would disappear after three correct trials. Mistakes were indicated to participants by a ding and a presentation of the correct sequence on the screen, with the movement on which a mistake occurred being highlighted. Participants were asked to produce the highlighted movement (i.e., the movement on which they had made a mistake), and then to proceed with the remainder of the sequence. Each block continued until five correct hand sequences without screen presentation were completed. However, the task was terminated if the criterion of five correct sequences was not achieved in 10 trials. Because motor learning on the BDS-O is assessed by errors, smoothness of performance, time to learn, and presence of perseverative tapping errors, the data collected with the current task allow evaluation of all of these characteristics.

Ding-tap (DT). This task was developed as a direct parallel to an item on the BDS-O in which participants are asked to tap the opposite number of times than the examiner. In the present task, participants listened to a series of dings. Following each ding (or set of two dings), they were expected to tap on the white dome. They were expected to tap twice in response to a single ding, and once in response to two dings. There were 20 trials, preceded by three practice trials. Mirroring errors (producing the same number of taps as there were dings), perseverative errors (three or more taps), and response latencies were recorded.

Complex go/no-go (CGNG). This task was developed as a parallel to a go/no-go task on the BDS-O. However, because the EI factor on the BDS-O has been plagued by ceiling effects, the present task was made more difficult. In particular, orange, green, and blue circles and squares appeared on the screen one at a time. Participants were expected to respond to the color of squares (i.e., press a key that corresponds in color), and to not respond in any way to circles. Because research with certain populations characterized by impulsivity has shown that providing rewards may actually lead to poorer performance (Newman & Kosson, 1986), feedback designed to encourage fast responding (and thus potentially encouraging commission errors) was provided on the screen. The speed for which participants were rewarded was based on each participant's own median choice reaction time (MCRT) from the CRT task performed earlier. Feedback was provided as follows: (a) slower than 500 ms above the MCRT: "Too slow"; (b) faster than 100 ms above the MCRT: "Nice speed"; (c) faster than the MCRT: "Great speed"; and (d) faster than 100 ms below the MCRT: "Fantastic speed." There were 18 trials, preceded by 6 practice trials. Stimuli stayed on the screen for 3000 ms, or until a participant responded. The total number of errors was recorded.
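To make the CGNG feedback rule concrete, the sketch below (our own illustration, not the authors' code; function and variable names are hypothetical) maps a single response latency to the feedback string using the participant's MCRT. Behavior for latencies that fall between the named cut-offs is an assumption of this sketch.

```python
def cgng_feedback(rt_ms: float, mcrt_ms: float) -> str:
    """Return the on-screen feedback string for one CGNG 'go' response.

    rt_ms   -- response latency on the current trial, in milliseconds
    mcrt_ms -- the participant's median choice reaction time from the CRT task
    Thresholds follow the rule described in the text; the unnamed band between
    100 ms and 500 ms above the MCRT is treated as 'no feedback' (assumption).
    """
    if rt_ms < mcrt_ms - 100:
        return "Fantastic speed"
    elif rt_ms < mcrt_ms:
        return "Great speed"
    elif rt_ms < mcrt_ms + 100:
        return "Nice speed"
    elif rt_ms > mcrt_ms + 500:
        return "Too slow"
    return ""  # latency in the band not named in the text (assumption)


# Example: a participant whose median choice reaction time was 450 ms
print(cgng_feedback(320, 450))  # -> "Fantastic speed"
print(cgng_feedback(980, 450))  # -> "Too slow"
```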


Reversal learning (RT). Reversal learning tasks are a type of go/no-go task. On the present task, participants were again presented with a series of colored squares and circles. They were asked to determine which particular characteristic (e.g., a particular shape or a particular color) they should respond to, and which particular characteristics they should ignore. For example, participants needed to learn to respond to all green figures, and to not respond in any way to figures of other colors. Responses were accomplished by tapping on the white dome on the response console. Feedback was provided on each trial. Once a participant completed 10 correct trials, the response principle was reversed (e.g., if "green" was the correct principle initially, then "red" would become the correct principle next). In this manner, the participants needed to learn to respond to the very characteristic which they had been asked to ignore previously. The task continued until four blocks of ten correct trials were completed, or until 30 incorrect trials on a given block occurred. The total number of incorrect responses was recorded.

Alphanumeric sequencing (AS). This task was developed as a parallel to the oral alphanumeric sequencing task on the BDS. As such, the task taxes working memory more so than the classic trail-making test does, in that no trail is drawn and thus participants always have to hold in mind not only where they are going, but also where they have been. In addition, this task taxes working memory more so than the simple oral alphanumeric sequencing used in the BDS-O, in that there is the additional demand to search among an array of keys for the locations of the upcoming symbols. In particular, participants were presented with a random array of keys containing the numbers "1" through "12" and the letters "A" through "L". They were required to press keys in an alternating sequence (1, A, 2, B, etc.). When a mistake was made, participants heard a ding, which was accompanied by the presentation of the correct symbol on the computer screen. Participants were instructed to press that symbol and to continue with the sequence. Timing started when the number 1 (i.e., the first symbol in the sequence) was pressed, and ended when the last symbol was pressed. The following aspects of performance were recorded: (a) the total amount of time to complete the task; (b) the number of perseverative errors (i.e., the number of times a participant failed to alternate); (c) the number of working memory errors (i.e., the number of times a person attempted to alternate, but made a mistake, such as pressing the letter F instead of the letter E); and (d) the latency between individual responses.

2-Back (2-B). We included this task as a purer measure of working memory that does not rely on motor speed or visual-spatial scanning. This task required that participants watch a series of words appear on the computer screen one at a time. Each word remained on the screen until participants responded, or for up to 2000 ms. The response-stimulus interval was 500 ms. The object was to try to remember what stimulus was presented two trials back. Participants pressed the "yes" key on the response console if they believed that the current stimulus had been presented two trials back. Otherwise, they pressed the "no" key. So, for example, if the following sequence was presented: DOG BIKE CAT BIKE, participants should respond "no" to all stimuli except the second "BIKE," since that same stimulus appeared two trials back. There were 45 stimuli, preceded by 9 practice trials. The total number of errors was recorded.
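A minimal sketch of the 2-back scoring logic just described (our own illustration; names are hypothetical): a "yes" is correct exactly when the current word matches the word shown two trials earlier, and the error count is the number of responses that disagree with that rule.

```python
def score_two_back(stimuli, responses):
    """Count errors on a 2-back task.

    stimuli   -- list of presented words, in order
    responses -- list of participant answers, "yes"/"no", same length
    The first two trials have no valid 2-back target, so "no" is treated
    as the correct answer there (an assumption of this sketch).
    """
    errors = 0
    for i, (word, answer) in enumerate(zip(stimuli, responses)):
        is_target = i >= 2 and word == stimuli[i - 2]
        correct = "yes" if is_target else "no"
        if answer != correct:
            errors += 1
    return errors


# Example from the text: DOG BIKE CAT BIKE -> only the second BIKE is a target
print(score_two_back(["DOG", "BIKE", "CAT", "BIKE"],
                     ["no", "no", "no", "yes"]))  # -> 0 errors
```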


Accuracy estimate (AE). As a parallel to the "insight" item on the BDS-O, the PTT task described above was followed by a query about participants' impressions of their own performance accuracy. Specifically, participants were asked to estimate how many mistakes they had made on the PTT task.

Measures of convergent validity

Behavioral Dyscontrol Scale—original version (BDS-O) (Grigsby & Kaye, 1996). Standard procedures outlined in the manual were followed. The BDS is comprised of nine items: two tasks requiring learning of novel hand movements, two requiring alternation between single and double tapping, two go/no-go tasks, one alphanumeric sequencing task, one task requiring copying of hand movements without mirroring, and an estimate of the accuracy of one's own performance. The test takes about 10 minutes to administer.

Controlled Oral Word Association Test (COWA) (Spreen & Strauss, 1991). Standard procedures outlined in the manual were followed. Participants were asked to generate words that begin with a given letter. The total number of correct words generated across three trials was used as a variable in the analyses. Although research has demonstrated that performance on this task goes beyond aphasic syndromes and may result from problems of initiation and regulation associated with frontal lobe damage (Miller, 1984; Pendleton, Heaton, Lehman, & Hulihan, 1982), some evidence suggests that this task is more strongly related to left-hemisphere, rather than general frontal-lobe, integrity (e.g., Suchy, Sands, & Chelune, 2003).

Ruff Figural Fluency Test (RFFT) (Ruff, 1996). Standard procedures outlined in the manual were followed. Participants were presented with five pages, each containing 35 five-dot matrices. They were instructed to connect two or more dots in each matrix, creating as many different designs as possible within one minute on each of the five test pages. The total number of unique designs generated by each participant, and the ratio between unique designs and perseverative errors, were used in the analyses. Performance on this test has been shown to be related to frontal lobe integrity (e.g., Lee et al., 1997).

Trail Making Test Part B (TMT-B) (War Department, Adjutant General's Office, 1944). Standard procedures were followed. Participants were asked to draw a line connecting numbers and letters in an alternating sequence (i.e., 1A, 2B, etc.). The time to completion measured in seconds was used in the analyses. TMT-B has been shown to be related to frontal lobe integrity (e.g., Segalowitz et al., 1992).

Stroop Color and Word Test, color-word page (Golden, 1978). Standard procedures outlined in the manual were followed. The test consists of a 45-second trial. Participants are asked to identify the color of ink in which different color words are printed. The total number of colors correctly identified in 45 seconds was used as a variable in the analyses. This task involves suppression of interference, and its association with frontal lobe integrity has been well documented (Buchtel, 1987; Pardo, Pardo, Janer, & Raichle, 1990).


Measures of discriminant validity

WAIS-III Information subtest. Standard procedures outlined in the manual were followed. The test consists of a series of questions tapping general fund of knowledge. Performance on this test is generally not affected by problems with executive functioning (O'Brien & Lezak, 1981). The total number of correct responses was used as a variable in the analyses.

Trail Making Test Part A (TMT-A) (War Department, Adjutant General's Office, 1944). Standard procedures were followed. Participants were asked to draw a line connecting circles containing numbers from 1 to 25. This task is frequently used as a measure of general cognitive efficiency and processing speed (e.g., Suchy et al., 2003). The amount of time needed for completion, measured in seconds, was used as a variable in the analyses.

Stroop Color and Word Test, word and color pages (Golden, 1978). Standard procedures outlined in the manual were followed. The test consists of two 45-second trials. The first entails reading a list of color names and the second involves identifying the color of a series of different colored X's. These tasks are frequently used as general measures of cognitive efficiency and processing speed (e.g., Suchy et al., 2003). The total number of items correctly identified in each of the two 45-second trials was used as a variable in the analyses.

PROCEDURES

College students volunteered to participate in the study in exchange for credit toward their introductory psychology course work. Middle-aged and older participants were recruited via advertisement and were paid $10 an hour for participation. Participants were tested one at a time in a quiet testing room. Upon arrival, participants underwent standard informed consent procedures, followed by a brief battery of traditional neuropsychological tests designed to examine convergent and discriminant validity. After completing the traditional cognitive tasks, participants completed the computerized administration of the BDS-EV. The computerized battery also included several additional tasks that were being piloted, including a computerized version of the Stroop Color and Word Test, a computerized motor fluency task, and two branching tasks. Although instructions for each task were available on the computer screen, the examiner assisted the participants by reading the instructions out loud, being available for questions, providing additional explanations of instructions if needed, and observing the participants during practice trials to detect errors and provide further explanation when necessary. The entire session typically lasted about two hours, with computerized testing taking about one hour. The portion of computerized testing that comprised the BDS-EV took about 30 minutes to complete.

Variables and Score Computations

In order to allow computation of composite scores, the data for each variable were converted to z scores.


We set out to create four scores: the BDS-EV total composite score as a parallel to the BDS-O total score, and three additional composite scores as parallels to the three BDS-O factor scores. Please note that because these three composite scores are at this point not based on a factor analysis of the BDS-EV items, but rather are theoretically derived, we will simply refer to them as index composites, not as factor scores. For each of the three index composites, the sum of five variables was used. For the total score, the sum of 15 variables was used. Variables were selected based on conceptual and theoretical understanding of the items on the BDS-O, with the goal of assessing three constructs: (a) motor learning and programming (MP), (b) environmental independence/capacity for inhibition (EI), and (c) fluid intelligence/working memory (FI). A detailed description of each variable can be found below.

Motor programming index composite (MP-EV)

The first three variables that comprise the MP-EV composite assess motor learning by examining smoothness, effortlessness, and accuracy of performance on a series of motor learning tasks. As such, these items parallel two motor learning items from the BDS-O. The last two variables in this set assessed the ability to engage in simple tapping without perseveration, paralleling the two tapping items on the BDS-O. The variables comprising the MP composite were as follows:

1. The total number of hand movement errors participants made while learning the four hand movement sequences that comprised the PTT task. This variable assessed the degree to which the participants were able to learn and execute novel sequences.

2. The difference between the expected time to complete a single correct sequence in PTT Block 4 (i.e., the 5-movement sequence) and the actual time needed for completion of a single correct Block 4 sequence (see the sketch following this list for a worked example). This variable assesses the degree to which an increase in sequence complexity interfered with fast and smooth performance. Specifically, we identified each participant's fastest completion time for a single correct two-movement (i.e., Block 1) sequence. The two-movement sequence is very easy to accomplish and is deemed to have minimal executive load. The fastest sequence completion time represented the best and most automatic (i.e., best learned) performance a participant achieved during this easy block. We then computed the predicted time a participant would require to complete a 5-movement sequence (i.e., a Block 4 sequence) if performance were equally smooth as in Block 1. In other words, we divided the fastest Block 1 time by 2, generating the time needed for a single hand movement, and multiplied it by 5, generating the time needed for five hand movements. We then subtracted this predicted value from the actual fastest completion time for a single correct sequence in Block 4. This difference represents the loss of performance speed as a function of sequence difficulty. Figure 2 demonstrates that with the increase in the complexity of the hand sequence, increasingly more time was required above and beyond the simple execution of well-learned motor behavior. As can be seen in the figure, this time increase was also directly related to an increase in age. In contrast, minimal increase with age was observed in the "expected time" variable (which is a function of performance speed on the 2-movement sequence).

Figure 2 The expected and the actual time to complete progressively more difficult hand movement sequences on the Push-Turn-Tap-tap (PTT) task of the BDS-EV, expressed for three age groups.

3. The ratio between the median tap-tap latency of the fourth (5-movement sequence) and the second (3-movement sequence) PTT block (also illustrated in the sketch following this list). Tap-tap latency refers to the latency between the first and the second tap in the tap-tap movement. This variable assessed the degree to which individual movements (as opposed to the entire sequences of movements) became automatic. In particular, even though the specific sequences of hand movements changed from the first to the fourth block, the tap-tap hand movement remained the same. In general, participants' tap-tap latency decreased by the second block, as they had ample opportunity to practice this movement. However, as the sequence difficulty increased, participants exhibited some slowing of the tap-tap latency. The greater the loss in automaticity due to an increase in task difficulty, the greater the ratio between the fourth and the second tap-tap latency. The mean profile of changes in tap-tap latencies across the four blocks can be seen in Figure 3.

4. The total number of perseverative errors during the tap-tap movement on the PTT task.

5. The total number of perseverative errors during tapping on the DT task. These variables assessed the degree to which participants were able to engage in double tapping without perseveration (i.e., generating more than two taps at a time).
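As a concrete illustration of variables 2 and 3 above, the sketch below computes the Block 4 "loss of smoothness" difference and the tap-tap latency ratio from made-up timing values; it paraphrases the computations described in the text and is not the authors' scoring code.

```python
def block4_slowing(fastest_block1_s: float, fastest_block4_s: float) -> float:
    """Actual minus predicted time for the fastest correct Block 4 sequence.

    The fastest correct 2-movement (Block 1) sequence estimates the time per
    well-learned movement; multiplying by 5 predicts how long a 5-movement
    (Block 4) sequence would take if performance stayed equally smooth.
    """
    per_movement = fastest_block1_s / 2.0
    predicted_block4 = per_movement * 5.0
    return fastest_block4_s - predicted_block4


def taptap_ratio(median_latency_block4_s: float, median_latency_block2_s: float) -> float:
    """Ratio of Block 4 to Block 2 median tap-tap latencies (loss of automaticity)."""
    return median_latency_block4_s / median_latency_block2_s


# Hypothetical participant: fastest Block 1 sequence 1.2 s, fastest Block 4 sequence 4.1 s
print(block4_slowing(1.2, 4.1))   # -> 1.1 s slower than the smoothness-based prediction
print(taptap_ratio(0.42, 0.35))   # -> 1.2
```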

Fluid intelligence index composite (FI-EV)

The first four variables that comprised the FI-EV composite assessed explicit working memory ability, i.e., the ability to hold information in mind when asked to do so. As such, they are a parallel to the alphanumeric sequencing item on the BDS-O. The last variable assessed a more implicit component of working memory, i.e., the ability to track details regarding one's own past performance, and served as a parallel to what has been referred to as "insight" on the BDS-O. The variables comprising the FI-EV composite were as follows:

1. The total time to complete the AS task, divided by the dominant-hand finger tapping speed. This variable assessed the speed of task performance while controlling for simple motor speed.


Figure 3 Mean profile of changes in tap-tap latencies across the four blocks on the Push-Turn-Tap-tap (PTT) task of the BDS-EV.

2. The difference between the shortest and the longest latency between responses on the AS task. This variable assessed the degree to which participants experienced lapses in working memory. In particular, the shortest latency between responses reflects the minimum amount of time needed for accomplishing a response on this task. This happened when a response was "easy" (i.e., one in which the participant likely knew what the subsequent symbol was and where on the board the symbol was located, as may be the case during the first few responses; the executive load of this response was thus deemed minimal). In contrast, the longest response latency reflected the degree to which participants struggled with either identifying the next symbol or finding the correct key on the board, thus carrying maximal executive load (i.e., holding and manipulating information in mind across a delay). Figure 4 demonstrates how the shortest response latency increased only slightly with age, whereas the longest latency increased rapidly as a function of age.

3. The number of working memory errors on the AS task. Working memory errors refer to errors in which the participants alternated between a letter and a number, but made an error while doing so (e.g., following the number 3 with the letter D instead of the letter C). These errors are differentiated from perseverative errors (i.e., errors in which participants fail to alternate). Unlike perseverative errors, which likely assess disinhibition of a prepotent response set (see the next section), working memory errors simply assess lapses in the capacity to hold and manipulate information in mind correctly.


Figure 4 The shortest and the longest response latency each participant generated on the Alphanumeric Sequencing (AS) task of the BDS-EV, expressed as means of three age groups. The white portions of the bars represent the mean differences between participants’ longest and shortest latencies.

4. The number of errors on the 2-B task. This variable assesses the ability to hold information in mind across a brief delay and despite distractions.

5. The accuracy of the estimate of the number of errors made on the PTT task. The participants' estimates were regressed onto the actual number of errors made, and standardized residuals were recorded (a minimal sketch of this computation appears after the EI-EV list below). This method of computing accuracy of estimate controls for the expected decrease in accuracy of estimate for individuals who made more errors. This task assesses the ability to correctly track one's past actions.

Environmental independence index composite (EI-EV)

The first three variables that comprise the EI-EV composite parallel the go/no-go items on the BDS-O, although the third variable represents a significant elaboration with a considerable increase in difficulty. The last two variables represent additional ways of assessing disinhibition that do not directly parallel any BDS-O items, but that are theoretically expected to tap into the same construct.

1. The total number of mirroring errors on the DT task. This variable assessed the inability to inhibit mirroring responses.

2. The median response latency on the DT task. This variable assesses subtle difficulty with selecting non-mirroring responses, reflected in slower, less efficient responding.

3. The total number of errors on the CGNG task. This variable assesses the inability to inhibit responding under complex circumstances in which rapid responding is reinforced.

4. The total number of perseverative errors on the AS task. Perseverative errors on this task refer to the failure to alternate between letters and numbers, such as proceeding to the number 4 after pressing the number 3. This variable assessed the inability to inhibit a prepotent response set.

5. The total number of errors on the RT task. This variable assessed the inability to inhibit responding to a previously reinforced stimulus.
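As noted in variable 5 of the FI-EV composite above, error estimates were regressed onto actual PTT errors and the standardized residuals retained. The sketch below shows one way to compute such residuals (an ordinary-least-squares illustration with made-up data; not the authors' scoring code).

```python
import numpy as np


def standardized_residuals(actual_errors, estimated_errors):
    """Regress estimates onto actual errors and return standardized residuals.

    A simple ordinary-least-squares fit; residuals are divided by their own
    sample standard deviation. This mirrors the description in the text but
    is only an illustration.
    """
    x = np.asarray(actual_errors, dtype=float)
    y = np.asarray(estimated_errors, dtype=float)
    slope, intercept = np.polyfit(x, y, deg=1)      # fit y = slope * x + intercept
    residuals = y - (slope * x + intercept)
    return residuals / residuals.std(ddof=1)


# Made-up data: five participants' actual vs. self-estimated PTT errors
actual = [0, 2, 3, 5, 8]
estimated = [0, 1, 4, 3, 9]
print(standardized_residuals(actual, estimated).round(2))
```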


BDS-EV total composite. The BDS-EV total composite was the sum of the MP-EV, FI-EV, and EI-EV composites.

Executive composite (EXEC). The EXEC composite was computed by generating z scores for all five traditional measures of executive functioning described under Instruments above, and computing the sum of these scores.

Processing speed/cognitive efficiency composite (PS/CE). The PS/CE composite was computed by generating z scores for all three traditional measures of cognitive efficiency described under Instruments above, and computing the sum of these scores.
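A minimal sketch of the score computations described in this section, under our own assumptions (pandas; hypothetical column names; not the authors' scoring code): each variable is converted to a z score across the sample and the composites are simple sums of z scores.

```python
import pandas as pd


def build_composites(df: pd.DataFrame, index_map: dict) -> pd.DataFrame:
    """Convert raw variables to z scores and sum them into composites.

    df        -- one row per participant, one column per BDS-EV variable
    index_map -- e.g. {"MP_EV": [...], "FI_EV": [...], "EI_EV": [...]}
    Returns one column per index composite plus a total composite.
    Column names are placeholders, not the authors' labels.
    """
    z = (df - df.mean()) / df.std(ddof=1)            # z scores across the sample
    out = pd.DataFrame(index=df.index)
    for name, cols in index_map.items():
        out[name] = z[cols].sum(axis=1)              # each index = sum of its z scores
    out["BDS_EV_total"] = out[list(index_map)].sum(axis=1)  # total = sum of the indices
    return out


# Example with made-up data for three of the fifteen variables
raw = pd.DataFrame({"ptt_errors": [0, 3, 7],
                    "as_time": [41.0, 55.0, 80.0],
                    "cgng_errors": [1, 2, 6]})
print(build_composites(raw, {"demo_index": ["ptt_errors", "as_time", "cgng_errors"]}))
```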

RESULTS

Descriptive Statistics

Descriptive statistics for the BDS-EV and BDS-O scores can be found in Tables 2 and 3, respectively. Please note that whereas on the BDS-O higher scores represent better performance, on the BDS-EV better performance is expressed by lower scores. As can be seen in Table 3, the BDS-O scores exhibit a considerable ceiling effect, with the range of scores considerably constricted, particularly at the younger ages. In fact, the EI factor shows no variance in either of the two younger age groups. In contrast, as can be seen in Table 2, a much broader range of scores occurs with the BDS-EV. However, the data also indicate that the BDS-EV exhibits some skewness, likely due to the fact that on some tasks younger participants made no, or nearly no, errors.

Table 2 Means, standard deviations (S.D.), and ranges for the BDS-EV composites

                 College (n = 20)   Middle-Aged (n = 16)   Older (n = 19)     Total (N = 55)
MP-EV
  Mean           -1.87              -.49                   2.38               .00
  SD             1.36               2.36                   5.18               3.81
  Range          -3.72 to 1.81      -2.84 to 5.30          -2.77 to 15.60     -3.72 to 15.60
FI-EV
  Mean           -1.82              -1.20                  2.93               .00
  SD             1.57               1.33                   4.60               3.62
  Range          -3.52 to 2.26      -4.12 to 2.06          -2.83 to 13.06     -4.12 to 13.06
EI-EV
  Mean           -1.39              -1.20                  2.47               .00
  SD             1.66               2.34                   4.06               3.56
  Range          -4.77 to 1.83      -4.11 to 5.49          -2.75 to 13.01     -4.77 to 13.01
BDS-EV-Total
  Mean           -5.08              -2.89                  7.78               .00
  SD             2.45               2.70                   11.29              8.94
  Range          -8.44 to -.30      -8.12 to 3.13          -7.63 to 39.17     -8.44 to 39.17

Note. BDS = Behavioral Dyscontrol Scale, MP = Motor Programming, FI = Fluid Intelligence, EI = Environmental Independence, EV = Electronic Version.


Table 3 Means, standard deviations (S.D.), and ranges for the original BDS scores

                 College (n = 20)   Middle-Aged (n = 16)   Older (n = 19)   Total (N = 55)
MP-Factor
  Mean           7.70               7.19                   6.58             7.16
  SD             .57                1.05                   1.80             1.32
  Range          6.00–8.00          5.00–8.00              2.00–8.00        2.00–8.00
FI-Factor
  Mean           6.70               6.75                   5.79             6.40
  SD             .57                .58                    1.18             .93
  Range          5.00–7.00          6.00–8.00              3.00–7.00        3.00–8.00
EI-Factor
  Mean           4.00               4.00                   3.63             3.87
  SD             .00                .00                    .96              .58
  Range          4.00–4.00          4.00–4.00              .00–4.00         .00–4.00
BDS-O-Total
  Mean           18.40              17.94                  16.00            17.44
  SD             .99                1.24                   2.70             2.09
  Range          15.00–19.00        15.00–19.00            10.00–19.00      10.00–19.00

Note. BDS-O = Behavioral Dyscontrol Scale—Original Version, MP = Motor Programming, FI = Fluid Intelligence, EI = Environmental Independence.

Internal Consistency Reliability

Internal consistency is a form of reliability, such that greater internal consistency among the items comprising a score leads to greater reliability of that score. Analyses yielded high reliability for the BDS-EV Total, MP-EV, and FI-EV composites (Cronbach's alpha = .871, .819, and .773, respectively), and adequate reliability for the EI composite (Cronbach's alpha = .696). Examination of item-total correlations suggested that removal of the DT Perseverative Errors variable would improve the alpha value for the MP-EV composite, increasing it to .864, and that removal of the Accuracy Estimate would improve the alpha value of the FI-EV composite, increasing it to .809. However, removal of these items from the total score composite did not lead to an improvement in that score's reliability (Cronbach's alpha = .870). Subsequent analyses were conducted both with the theoretically derived composites and with the composites that were adjusted (by removing items) to increase internal consistency (MP-A and FI-A).
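For reference, Cronbach's alpha for a composite of k item scores can be computed as in the sketch below (the standard formula; the data are made up and the function is ours, not part of the study materials).

```python
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n participants x k items) score matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)


# Made-up example: 6 participants, 5 correlated item scores
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 1))
scores = base + rng.normal(scale=0.5, size=(6, 5))
print(round(cronbach_alpha(scores), 3))
```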

Convergent Validity

In order to examine whether the BDS-EV scores (a) correspond to the BDS-O scores and (b) assess a construct similar to that of other measures purported to assess executive abilities, we computed Pearson product-moment correlations among the BDS-EV and BDS-O scores, as well as correlations between the BDS variables and other measures of executive functioning.


Table 4 Pearson product-moment correlations among BDS-EV and BDS-O variables

                BDS-O tot   MP-EV    FI-EV    EI-EV    MP-A     FI-A
BDS-EV total    .816a       .846a    .862a    .775a    .834a    .815a
BDS-O MP        .849a       .569a    .598a    .270     .566a    .521a
BDS-O FI        .668a       .476a    .501a    .426b    .417a    .451a
BDS-O EI        .598a       .540a    .572a    .632a    .619a    .627a

Note. BDS = Behavioral Dyscontrol Scale, MP = Motor Programming, FI = Fluid Intelligence, EI = Environmental Independence, EV = Electronic Version, O = Original Version, A = Adjusted to improve internal consistency, N = 55.
a p < .001. b p < .005.

Correspondence with the BDS-O. As can be seen in Table 4, there was a high correlation between the BDS-EV and BDS-O total scores. Additionally, the EI-EV showed a medium to high correlation with the corresponding EI factor of the BDS-O, while being only modestly correlated with the non-corresponding BDS-O factor scores. In contrast, MP-EV and FI-EV yielded medium-sized correlations with all BDS-O factor scores, showing little specificity to the particular factor constructs. The above analyses were repeated using the more internally consistent adjusted BDS-EV scores. The results for these analyses can be found in the two right-most columns of Table 4. As can be seen in the table, adjusting scores to increase reliability did not noticeably improve the results.

Correspondence with traditional executive measures. Convergent validity was also supported by medium to high correlations with traditional clinical measures of executive functioning, both singly and when expressed as an executive composite (EXEC).
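The correlations in Tables 4 and 5 are ordinary Pearson product-moment coefficients; the sketch below shows how such a coefficient could be obtained (made-up scores, placeholder names; the sign-reversal step reflects our assumption about how scores with opposite orientations would be aligned, since lower BDS-EV scores indicate better performance).

```python
import numpy as np
from scipy import stats

# Made-up composite scores for five participants (placeholders, not study data)
bds_ev_total = np.array([-5.1, -2.3, 0.4, 3.8, 9.0])   # lower = better
bds_o_total = np.array([19, 18, 18, 16, 12])            # higher = better

# Reverse the BDS-EV sign so that higher = better for both instruments
# before correlating (an assumption of this illustration).
r, p = stats.pearsonr(-bds_ev_total, bds_o_total)
print(f"r = {r:.3f}, p = {p:.4f}")
```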

Table 5 Pearson product-moment correlations between BDS variables and traditional measures of executive functioning

                COWA    RFFT-UD   RFFT-ER   STRP-CW   TMT-B    EXEC
BDS-O total     .159    .555a     .577a     .470a     .692a    .717a
BDS-O MP        .013    .396a     .435a     .372a     .581a    .518a
BDS-O FI        .213    .521a     .493a     .305b     .464a    .584a
BDS-O EI        .260    .555a     .296b     .359a     .427a    .468a
BDS-EV total    .129    .585a     .615a     .567a     .820a    .794a
MP-EV           .063    .440a     .726a     .414a     .756a    .702a
FI-EV           .107    .583a     .359a     .608a     .787a    .715a
EI-EV           .156    .431a     .428a     .385a     .478a    .549a
MP-A            .087    .405a     .591a     .444a     .732a    .661a
FI-A            .121    .562a     .263      .594a     .697a    .654a

Note. BDS = Behavioral Dyscontrol Scale, MP = Motor Programming, FI = Fluid Intelligence, EI = Environmental Independence, EV = Electronic Version, O = Original Version, A = Adjusted to improve internal consistency, COWA = Controlled Oral Word Association, RFFT = Ruff Figural Fluency Test, UD = Unique Designs, ER = Error Ratio, STRP-CW = Stroop Color Word Page, TMT-B = Trail Making Test Part B, EXEC = Composite of COWA, RFFT, STRP-CW, and TMT-B.
a p < .001. b p < .005.


As can be seen in Table 5, the original and the electronic scores have similar patterns of correlations with other executive measures, further suggesting that both tests tap into a similar construct. Additionally, as can be seen in Table 5, none of the BDS scores were related to phonemic fluency (COWA), consistent with the measure's minimal linguistic demands. The table also demonstrates that the adjusted scores (bottom of the table) do not appear to be more valid than the theoretically derived scores (mid-section of the table). Taken together, these results demonstrate impressive convergent validity for the new instrument.

Discriminant validity. In addition to examining the degree to which the BDS-EV relates to other measures of executive functioning, we examined the degree to which it relates to general fund of knowledge (crystallized intelligence) and to measures of processing speed/cognitive efficiency. Table 6 shows that the BDS (both original and electronic) is at most mildly related to crystallized intelligence, with most of the variance accounted for by the FI score for both the original and the electronic versions. Processing speed appears to be more strongly related to BDS performance, with the variance comparably distributed among the BDS-O factor scores, and mostly related to the FI composite of the electronic version. Nevertheless, the correlations between the BDS scores and processing speed are considerably lower than the correlations of the BDS with executive measures. Finally, as can be seen in the table, there is no clear advantage of the adjusted scores (bottom of the table) over the theoretically derived scores (mid-section of the table).

Because of the lack of evidence that the more reliable adjusted BDS-EV index composites are more valid than the theoretically derived scores, and because of the pilot nature of this study, the remainder of the analyses below use the originally developed scores only.

Table 6 Pearson product-moment correlations between BDS variables and traditional measures of nonexecutive functioning

                Information   TMT-A    STRP-W   STRP-C   PS/CE
BDS-O total     .270a         .352b    .391b    .440b    .482b
BDS-O MP        .103          .193     .281a    .354b    .337a
BDS-O FI        .338a         .290a    .238     .247     .316a
BDS-O EI        .195          .363b    .386b    .383b    .461b
BDS-EV total    .267a         .442b    .408b    .501b    .550b
MP-EV           .182          .253     .278a    .292a    .335a
FI-EV           .360b         .505b    .457b    .567b    .622b
EI-EV           .117          .346b    .278a    .392b    .414b
MP-A            .209          .321a    .324a    .328b    .396b
FI-A            .311a         .530b    .472b    .572b    .641b

Note. BDS = Behavioral Dyscontrol Scale, MP = Motor Programming, FI = Fluid Intelligence, EI = Environmental Independence, EV = Electronic Version, O = Original Version, A = Adjusted to improve internal consistency, Information = WAIS-III Information subtest, TMT-A = Trail Making Test Part A, STRP-W = Stroop Word Page, STRP-C = Stroop Color Page, PS/CE = Composite of TMT-A, STRP-W, and STRP-C.
a p < .005. b p < .001.


Table 7 Partial correlations among executive variables after controlling for age and processing speed/cognitive efficiency (PS/CE)

              BDS-EV    BDS-O
BDS-O total   .728a     —
EXEC          .632a     .567a

Note. BDS = Behavioral Dyscontrol Scale, EV = Electronic Version, O = Original Version, EXEC = Composite of four traditional executive measures.
a p < .001.

Partial Correlations

In order to ascertain that the relationships among the BDS-EV, the BDS-O, and the EXEC cannot be attributed to other variables (such as some non-executive age-related process and/or processing speed), we computed partial correlations among the BDS-O, BDS-EV, and EXEC while controlling for the PS/CE composite and for age. These analyses led to a slight decrease in the correlation coefficients, but demonstrated a continued strong relationship between the BDS variables, as well as between the BDS variables and traditional measures of executive functioning. See Table 7.
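Partial correlations of this kind can be obtained by residualizing both variables on the covariates and correlating the residuals; the sketch below is our own illustration with made-up data, not the study's analysis code.

```python
import numpy as np
from scipy import stats


def partial_corr(x, y, covariates):
    """Pearson correlation between x and y after removing covariates from both.

    covariates -- 2-D array, one column per control variable (e.g., age, PS/CE).
    Both x and y are residualized on the covariates via least squares. Note
    that the p-value returned by pearsonr does not adjust the degrees of
    freedom for the number of covariates.
    """
    z = np.column_stack([np.ones(len(x)), covariates])
    x_res = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    y_res = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return stats.pearsonr(x_res, y_res)


# Made-up data for eight participants
rng = np.random.default_rng(1)
age = rng.uniform(18, 68, 8)
ps_ce = 0.05 * age + rng.normal(scale=0.5, size=8)
bds_ev = 0.2 * age + 0.5 * ps_ce + rng.normal(scale=1.0, size=8)
exec_comp = 0.15 * age + 0.4 * bds_ev + rng.normal(scale=1.0, size=8)
print(partial_corr(bds_ev, exec_comp, np.column_stack([age, ps_ce])))
```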

Incremental Validity

To determine whether the BDS-EV holds promise for improving the clinical utility of the BDS-O, we wanted to conduct a direct comparison of the BDS-EV and the BDS-O in terms of their ability to classify participants. Because this study represents the first examination of the instrument and as such was conducted with healthy participants, and because executive abilities are known to decline with age (e.g., Schretlen et al., 2000), we used age as a proxy for mild decline in executive abilities. Specifically, we examined whether the BDS-EV can improve classification of participants into age groups above and beyond the classification accomplished by the BDS-O. However, because processing speed is also known to decline with age (Schretlen et al., 2000), and because our results show that both versions of the BDS have a mild to moderate correlation with processing speed, we controlled for processing speed as well. Specifically, we conducted a series of multinomial logistic regressions, using age group (college, middle-aged, and older) as the dependent variable, and PS/CE, BDS-O, and BDS-EV as predictors.

The results showed that when PS/CE alone was entered into the model, it was a significant predictor of age group membership, chi-square(2) = 21.41, p < .001. This model correctly classified 56.40% of the participants. When BDS-O was entered as a second predictor into the above model, it accounted for additional age-related variance above and beyond PS/CE, chi-square(2) = 12.44, p = .002, while somewhat reducing the PS/CE contribution, chi-square(2) = 13.80, p = .001. This model correctly classified 60.00% of the participants. When BDS-EV was entered as a third variable into the above model, it accounted for age-related variance above and beyond both of the other two variables, chi-square(2) = 16.03, p < .001, while suppressing the contributions of PS/CE and BDS-O, chi-square(2) = 6.42, p = .004, and chi-square(2) = .226, p = .893, respectively. This model correctly classified 69.10% of the participants, demonstrating the incremental validity of the BDS-EV in this situation.
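The hierarchical comparison described above can be reproduced in outline with multinomial logistic regression and likelihood-ratio chi-square tests for each added predictor; the sketch below uses statsmodels with made-up data and is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Made-up data: age group (0 = college, 1 = middle-aged, 2 = older) and predictors
rng = np.random.default_rng(2)
n = 60
group = np.repeat([0, 1, 2], n // 3)
ps_ce = group + rng.normal(scale=1.0, size=n)
bds_o = -group + rng.normal(scale=1.0, size=n)
bds_ev = group + rng.normal(scale=0.8, size=n)


def fit_model(predictors):
    """Fit a multinomial logistic regression of age group on the given predictors."""
    X = sm.add_constant(pd.DataFrame(predictors))
    return sm.MNLogit(group, X).fit(disp=False)


reduced = fit_model({"ps_ce": ps_ce, "bds_o": bds_o})
full = fit_model({"ps_ce": ps_ce, "bds_o": bds_o, "bds_ev": bds_ev})

# Likelihood-ratio chi-square for adding BDS-EV: df = 2, i.e., one coefficient
# per non-reference outcome category (analogous to the chi-square(2) values above).
lr = 2 * (full.llf - reduced.llf)
print(f"chi-square(2) = {lr:.2f}, p = {stats.chi2.sf(lr, df=2):.4f}")

# Classification accuracy of the full model
predicted = np.asarray(full.predict()).argmax(axis=1)
print(f"correctly classified: {(predicted == group).mean():.1%}")
```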


The above analyses demonstrate that the BDS-EV appears to be a better predictor of age-related decline in executive abilities than the BDS-O. However, because the BDS-EV takes longer to administer than the BDS-O, demonstrating that it represents a better predictor does not adequately address the instrument's clinical utility. In other words, it is not surprising that a longer battery should perform better than a shorter one. For that reason, we wanted to examine whether the BDS-EV could outperform not only the BDS-O, but also a battery of traditional executive measures that takes approximately the same amount of time to administer as the BDS-EV. To that end, we tested a model in which we examined whether the BDS-EV would account for additional variance above and beyond PS/CE and EXEC. The results showed that the BDS-EV accounted for variance above and beyond the other two variables, chi-square(2) = 12.91, p = .002. This model correctly classified 65.5% of participants, with the BDS-EV emerging as the only significant predictor.

DISCUSSION

The present study represents the first examination of the feasibility of converting the original Behavioral Dyscontrol Scale (BDS-O) into an electronically administered instrument. The impetus for this project was twofold: On the one hand, prior research has demonstrated that the BDS-O is a promising instrument that has excellent ecological validity when used with geriatric patients and has shown promise with TBI patients as well. On the other hand, the BDS has not enjoyed as wide a use in clinical settings as one would expect, in part likely due to its reliance on a somewhat subjective scoring system, and in part due to its somewhat narrow range of scores (which limits its use with younger populations). The present article represents the first introduction of the BDS-EV, which is being developed with the aim of remedying the shortcomings of the BDS-O while retaining its strengths.

The first step in the present study was to determine whether the items that had been theoretically selected to comprise the BDS-EV have sufficient internal consistency to warrant generating total and index composite scores. In general, the results demonstrated very good reliability, with Cronbach's alpha values ranging approximately from .70 to .87. The lowest reliability was yielded by the EI index composite, likely due to the greater variety of tasks comprising this score. Although removal of some items further improved the reliability of the MP and FI composites, item removal did not seem to improve score validity. Given that this is the first examination of this instrument, and that the removal of these items did not improve the reliability of the total score, removal of these items without a theoretical justification is not warranted at this time.

Following the demonstration of adequate reliability, we proceeded to test the BDS-EV's construct validity. We found that the BDS-EV total score is reasonably comparable to the BDS-O total score. In particular, the BDS-EV correlates highly with the original test, and its pattern of correlation coefficients with other neuropsychological instruments is comparable to that exhibited by the BDS-O.


Additionally, the BDS-EV correlates highly with the BDS-O even after age and processing speed have been controlled for. This result suggests that the two BDS instruments likely measure the same construct (presumably executive abilities).

In addition to providing initial support for the construct validity of the BDS-EV, the present study also provided support for the incremental validity of this new instrument. In particular, the BDS-EV was able to account for age-related changes in performance above and beyond processing speed and the BDS-O, as well as above and beyond processing speed combined with traditional measures of executive processing. As such, the BDS-EV promises to be equally demanding in terms of administration time as a traditional way of assessing executive abilities, while providing greater ease of administration along with greater accuracy of patient classification. These initial findings, however, need to be replicated with a variety of patient populations before more definite conclusions can be drawn.

Compared to the very encouraging findings regarding the BDS-EV total composite, less satisfying results emerged when examining the BDS-EV index composites. Specifically, while the index composites showed reasonable convergent validity with the corresponding BDS-O factor scores, they did not demonstrate discriminant validity with respect to the non-corresponding BDS-O factors. One exception to this was the EI-EV composite, which showed only mild correlations with the non-corresponding BDS-O factors. This latter finding is encouraging, especially given that the EI factor score was the one with the most ceiling effects and range constriction problems on the original instrument. In fact, examination of Tables 5 and 6 reveals that the electronic EI, as compared to the original EI, shows both higher correlations with executive measures and lower correlations with non-executive measures, suggesting an improvement in construct validity over the original instrument. However, these conclusions need to be viewed tentatively, especially given the highly skewed score distribution of the original EI factor scores (see Table 3).

There are several ways in which the BDS-O and the BDS-EV are not directly parallel, which may have affected the present results. First, the BDS-EV includes measures of speed, whereas the BDS-O is untimed. Additionally, speed of information processing is not weighted equally among the different BDS-EV index scores, with FI-EV having the greatest speed load. This likely explains why FI-EV (as compared to the BDS-O FI) shows a stronger relationship to the Stroop Color and Word Test and to the processing speed composite (Tables 5 and 6, respectively). Second, whereas the BDS-EV index scores are each comprised of an equal number of items, the BDS-O factors are comprised of between two and four items. Therefore, the relative contributions of individual abilities to the total BDS scores differ for the two instruments. In other words, the BDS-O total score relies much more heavily on motor learning and programming abilities than does the BDS-EV total score. Third, a more thorough assessment of insight occurs with the BDS-O than with the BDS-EV. Future research may examine the implications of this by introducing a more global insight assessment as a BDS-EV variable. Because of these differences, future research should not focus on improving the correlations between the BDS-EV and the BDS-O.
Rather, future work with the BDS-EV should focus on demonstrating its ability to tap into the three constructs (MP, FI, EI), and on demonstrating that performances on the three composite scores may be differentially related to behavioral outcomes in various neurologic populations.


re-examined with a larger sample using both exploratory and confirmatory factor analyses, and the ecological validity of the scores needs to be examined in patient populations.

The differences between the two instruments also serve as a reminder that the two measures should not be viewed as interchangeable. Depending on the findings of future validation studies, we may find it useful to re-conceptualize the BDS-EV as an instrument that was inspired by, rather than psychometrically parallel to, the BDS-O. Such a re-conceptualization may also warrant giving the BDS-EV a new, original name.

There are several weaknesses of the present study that should also be noted. First, the present study did not use the alternative scoring system that has previously been developed for the BDS-O and that somewhat increases BDS-O score variance. This omission might be offered as one explanation for the limited comparability between the factor and index scores. However, it is more likely that increasing the BDS-O variance would simply have increased the size of the correlation coefficients between the BDS-O and other executive measures (whereas in the current study, the BDS-EV exhibits somewhat higher correlation coefficients than the BDS-O). Second, the BDS-EV does not include a control for pure visual-spatial scanning, and thus does not allow us to determine the degree to which performance on the AS task might have been affected by deficits in this functional domain. Future refinements of the BDS-EV should add a baseline task that controls for this ability. Third, the present results need to be considered tentative given the modest sample size. Finally, the classification of participants into age groups should not be misconstrued as comparable to patient classification; it is possible, for example, that patients will exhibit a very different profile of scores than that exhibited by older participants.

The above weaknesses notwithstanding, the present results demonstrate that the electronic version of the BDS holds promise and warrants further investigation and development. In particular, validation studies with a variety of patient populations, as well as reliability studies with larger samples, need to be conducted. Should results continue to be positive, the development of demographically corrected norms should follow. At the present time, however, the instrument is clearly not ready for clinical use.

ACKNOWLEDGEMENTS

The study was approved by the University of Utah IRB. We wish to thank Jim Grigsby for graciously providing us with the BDS-O materials and for taking the time to review our draft of the BDS-EV tasks.

REFERENCES

Buchtel, H. A. (1987). Attention and vigilance after head trauma. In H. S. Levin, J. Grafman, & H. M. Eisenberg (Eds.), Neurobehavioral recovery from head injury (pp. 372–378). New York: Oxford University Press.
Ecklund-Johnson, E. P., Miller, S. A., & Sweet, J. J. (2004). Confirmatory factor analysis of the Behavioral Dyscontrol Scale in a mixed clinical sample. Clinical Neuropsychologist, 18(3), 395–410.
Golden, C. J. (1978). Stroop Color and Word Test. Wood Dale, IL: Stoelting Company.
Grigsby, J., & Kaye, K. (1996). The Behavioral Dyscontrol Scale: Manual (2nd ed.). Denver, CO: Authors.
Grigsby, J., Kaye, K., Baxter, J., Shetterly, S. M., & Hamman, R. F. (1998). Executive cognitive abilities and functional status among community-dwelling older persons in the San Luis Valley Health and Aging Study. Journal of the American Geriatrics Society, 46(5), 590–596.
Grigsby, J., Kaye, K., Eilertsen, T. B., & Kramer, A. M. (2000). The Behavioral Dyscontrol Scale and functional status among elderly medical and surgical rehabilitation patients. Journal of Clinical Geropsychology, 6(4), 259–268.
Grigsby, J., Kaye, K., Kowalsky, J., & Kramer, A. M. (2002a). Association of behavioral self-regulation with concurrent functional capacity among stroke rehabilitation patients. Journal of Clinical Geropsychology, 8(1), 25–33.
Grigsby, J., Kaye, K., Kowalsky, J., & Kramer, A. M. (2002b). Relationship between functional status and the capacity to regulate behavior among elderly persons following hip fracture. Rehabilitation Psychology, 47(3), 291–307.
Grigsby, J., Kaye, K., & Robbins, L. J. (1992). Reliabilities, norms, and factor structure of the Behavioral Dyscontrol Scale. Perceptual & Motor Skills, 74(3, Pt. 1), 883–892.
Kaye, K., Grigsby, J., Robbins, L. J., & Korzun, B. (1990). Prediction of independent functioning and behavior problems in geriatric patients. Journal of the American Geriatrics Society, 38(12), 1304–1310.
Leahy, B., Suchy, Y., Sweet, J. J., & Lam, C. S. (2003). Behavioral Dyscontrol Scale deficits among traumatic brain injury patients, Part I: Validation with nongeriatric patients. Clinical Neuropsychologist, 17(4), 474–491.
Lee, G. P., Strauss, E., Loring, D. W., McCloskey, L., Haworth, J. M., & Lehman, R. A. W. (1997). Sensitivity of figural fluency on the Five-Point Test to focal neurological dysfunction. Clinical Neuropsychologist, 11(1), 59–68.
Miller, E. (1984). Verbal fluency as a function of a measure of verbal intelligence and in relation to different types of cerebral pathology. British Journal of Clinical Psychology, 23, 53–57.
Newman, J. P., & Kosson, D. S. (1986). Passive avoidance learning in psychopathic and nonpsychopathic offenders. Journal of Abnormal Psychology, 95, 252–256.
O'Brien, K., & Lezak, M. D. (1981, July). Long-term improvement in intellectual function following brain injury. Paper presented at the Fourth European Conference of the INS, Bergen, Norway.
Pardo, J. V., Pardo, P. J., Janer, K. W., & Raichle, M. E. (1990). The anterior cingulate cortex mediates processing selection in the Stroop attentional conflict paradigm. Proceedings of the National Academy of Sciences, 87, 256–259.
Pendleton, M. G., Heaton, R. K., Lehman, R. A. W., & Hulihan, D. (1982). Diagnostic utility of the Thurstone Word Fluency Test in neuropsychological evaluations. Journal of Clinical Neuropsychology, 4, 307–318.
Ruff, R. M. (1996). Ruff Figural Fluency Test: Professional manual. Odessa, FL: Psychological Assessment Resources, Inc.
Schretlen, D., Pearlson, G. D., Anthony, J. C., Aylward, E. H., Augustine, A. M., Davis, A., et al. (2000). Elucidating the contributions of processing speed, executive ability, and frontal lobe volume to normal age-related differences in fluid intelligence. Journal of the International Neuropsychological Society, 6(1), 52–61.
Segalowitz, S. J., Unsal, A., & Dywan, J. (1992). CNV evidence for the distinctiveness of frontal and posterior neural processes in a traumatic brain-injured population. Journal of Clinical and Experimental Neuropsychology, 52, 658–662.
Spreen, O., & Strauss, E. (1991). A compendium of neuropsychological tests. New York: Oxford University Press.
Suchy, Y., Blint, A., & Osmon, D. S. (1997). Behavioral Dyscontrol Scale: Criterion and predictive validity in an inpatient rehabilitation unit population. Clinical Neuropsychologist, 11(3), 258–265.
Suchy, Y., & Bolger, J. (1999). The Behavioral Dyscontrol Scale as a predictor of aggression against self or others in psychogeriatric patients. The Clinical Neuropsychologist, 13, 487–494.
Suchy, Y., Leahy, B., Sweet, J. J., & Lam, C. S. (2003). Behavioral Dyscontrol Scale deficits among traumatic brain injury patients, Part II: Comparison to other measures of executive functioning. Clinical Neuropsychologist, 17(4), 492–506.
Suchy, Y., Sands, K., & Chelune, G. J. (2002). Verbal and nonverbal fluency performance before and after seizure surgery. Journal of Clinical & Experimental Neuropsychology, 25(2), 398–411.
War Department, Adjutant General's Office. (1944). Army Individual Test Battery. Washington, DC: Author.