Archives of Clinical Neuropsychology 17 (2002) 143 – 156

A multidimensional approach towards malingering detection

Lori A. Holmquist a,*, Richard L. Wanlass b,1

a California School of Professional Psychology, Fresno, CA, USA
b Department of Physical Medicine and Rehabilitation, University of California-Davis Medical Center, 4860 Y Street, Suite 1100, Sacramento, CA 95817, USA

Accepted 2 October 2000

Abstract

A validation study using 62 subjects was conducted on the Multidimensional Investigation of Neuropsychological Dissimulation (MIND), a new neuropsychological instrument used to detect exaggeration of brain-injury symptoms. This instrument has eight scoring indices that use multiple, empirically based strategies to detect poor effort. Discriminant function analysis was used to classify three groups of subjects: normals responding in a sincere manner (N = 24); normals who were educated about mild to moderate head injuries and given substantial incentives to malinger without obvious detection (N = 21); and clinically diagnosed, brain-injured patients with mild to moderate impairments (N = 17). A univariate F test indicated significant group differences on six of the eight original predictor variables. Using these six variables, there was an overall classification rate of 68%, reflecting only a 10% false negative rate in the dissimulating group. For a two-group classification (i.e., dissimulating and mildly to moderately brain-injured subjects), an 82% overall accuracy rate was achieved. The MIND appears to have potential for identifying individuals exaggerating mild to moderate neuropsychological deficits. © 2001 National Academy of Neuropsychology. Published by Elsevier Science Ltd.

Keywords: Malingering; Neuropsychological dissimulation

* Corresponding author. Department of Psychiatry and Behavioral Sciences, University of Oklahoma Health Sciences Center, Room 2308, 940 NE 13th Street, Oklahoma City, OK 73104, USA. Tel.: +1-405-271-4401. E-mail addresses: [email protected] (L.A. Holmquist), [email protected] (R.L. Wanlass).
1 Also corresponding author.

0887-6177/01/$ – see front matter © 2001 National Academy of Neuropsychology.
PII: S0887-6177(00)00106-2

Neuropsychological testing is being used with increasing frequency in medical–legal cases. The validity of neuropsychological testing depends upon sincere and active participation by the examinee. In cases with an incentive for poor performance, there is the potential for response bias or malingering (i.e., the intentional exaggeration or feigning of impairments for secondary gain). It has been estimated that 2 million head injuries occur each year in the United States, most of them minor (Kraus & McArthur, 1995). This mild head-trauma population may be most at risk for feigning neuropsychological deficits. Studies (Binder, 1993; Binder & Rohling, 1995; Binder & Willis, 1991; Trueblood & Schmidt, 1993) have shown that, given the presence of financial incentives, people with mild head injuries perform worse on some measures of effort or sincerity than those with severe head trauma.

To facilitate accurate identification of legitimate mild head-injury complaints, there has been an increase in the development of objective assessment techniques for the detection of malingering. Two main strategies characterize this research. The first involves comparing the examinee's performance on current neuropsychological tests with what is known about typical brain functioning. Studies have shown that suspected malingerers tend to reveal different patterns of incorrect responses than actual brain-injured patients (Bernard, McGrath, & Houston, 1993; Goebel, 1983; Greiffenstein, Baker, & Gola, 1994; Heaton, Smith, Lehman, & Vogt, 1978; Martin, Bolter, Todd, Gouvier, & Niccolls, 1993; Mittenberg, Azrin, Millsaps, & Heilbronner, 1993). A second approach is the development of unidimensional measures designed to detect specific deception strategies (Beetar & Williams, 1995; Binder, 1993; Gudjonsson & Shackleton, 1986; Iverson & Franzen, 1994; Wiggins & Brandt, 1988). For example, the theoretical basis for tests such as the Portland Digit Recognition Test and the Rey 15-Item Test is that malingerers will tend to misjudge the actual difficulty level of the test and perform more poorly than severely brain-injured patients.
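The logic behind forced-choice measures can be made concrete with a short calculation. On a two-alternative task, an examinee with no memory of the stimuli should still score near 50% by guessing, so a score significantly below chance implies deliberate wrong answers. The sketch below is illustrative only, not part of any published scoring procedure for these tests, and uses an exact one-tailed binomial test:

```python
from math import comb

def below_chance_p(correct: int, n_items: int, p_guess: float = 0.5) -> float:
    """Exact one-tailed binomial probability of scoring `correct` or fewer
    out of n_items two-alternative items by guessing alone."""
    return sum(comb(n_items, k) * p_guess ** k * (1 - p_guess) ** (n_items - k)
               for k in range(correct + 1))

# A hypothetical examinee answering 26 of 72 forced-choice items correctly:
p = below_chance_p(26, 72)
# A very small p means so low a score is unlikely to arise from guessing,
# suggesting the examinee knew the answers and chose the wrong ones.
```

A score at or above chance, of course, does not rule out poor effort; that asymmetry is one motivation for combining forced-choice logic with other detection strategies.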
Schretlen (1988) suggests that supplying information regarding the condition to be faked is a useful experimental strategy that is likely to increase the plausibility of faked test results. Rogers (1984) made specific recommendations for improving research design in this domain: (a) subjects who are asked to malinger should be given an incentive for success, such as a financial reward; (b) simulation instructions (i.e., specific symptomatology) must be precise and should emphasize the believability of simulated behavior; (c) subject compliance must be assessed with a debriefing interview; and (d) discriminant functions and other multivariate techniques should be used to produce standardized indicators of dissimulation.

There have been a few studies exploring the effects of coaching on neuropsychological instruments used to detect deception (Frederick, Sarfaty, Johnston, & Powel, 1994; Martin et al., 1993). These investigators found that subjects coached in strategies to reduce detection as dissimulators performed above chance levels on forced-choice measures and obtained higher scores than subjects who did not receive information. Recent literature reviews (Iverson, Franzen, & McCracken, 1991; Mills & Putman, 1995; Nies & Sweet, 1994) of the more common instruments presently used to detect dissimulation in mild brain-injury cases have concluded that an integrated approach toward the detection of malingering is needed. To date, no neuropsychological instrument has been developed that simultaneously measures several deception strategies.

The purpose of this study was to validate a new multidimensional instrument developed specifically to measure performance patterns of suspected malingerers: the Multidimensional Investigation of Neuropsychological Dissimulation (MIND). This study examines whether sophisticated dissimulators produce a pattern of performance on the MIND inconsistent with that of actual head-trauma patients. To control for external and internal validity issues, this study followed methodological recommendations made by Rogers (1984) and Schretlen (1988).

1. Method

1.1. Subjects

Normal subjects consisted of 47 college undergraduates, who were randomly assigned to either a control or a dissimulating group and who received course credit for participation. To be included in the study, students were required to have normal or corrected vision. None had a history of mental or neurological illness or developmental disorder, and none had prior work-related exposure to the brain-injured population.

The brain-injured sample consisted of 17 mildly to moderately brain-injured subjects. Severity was defined as a length of unconsciousness not exceeding 60 min (Lezak, 1995); all subjects' length of unconsciousness was less than 30 min. The mean length of time since injury was 3.2 years. None of the subjects reported experiencing posttraumatic amnesia exceeding a few minutes. Twelve of the 17 subjects (71%) were injured in a motor-vehicle accident, 18% of the injuries involved falls, and 11% involved an assault. No subjects were actively involved in litigation; however, most had sought legal action, which was settled prior to the study. At the time of testing, 41% of the subjects were working full time, 29% were on disability, 30% were unemployed, and none were working part time.

1.2. Instruments

The MIND (Thomas & Wanlass, 1994) was designed as an attempt to synthesize several of the existing strategies of malingering detection into one neuropsychological instrument. It has eight scoring indices, each incorporating one or more of the common theoretically based deception strategies (Forced-Choice, Grouped vs. Ungrouped, Split, Similarity, Sequence, Consistency, Learn, and Recall). In the field of malingering research, several consistent patterns of performance by simulated malingerers or mild head-injury patients in litigation have been identified.
Forced-choice recognition tasks and instruments that incorporate varying levels of item difficulty are two examples of the methodology used. Whereas forced-choice formats deal with response probabilities, the item-difficulty format relies on the assumption that people will overestimate the impairment required on certain tasks. The theory is that malingerers will misjudge the actual difficulty level of the test and perform more poorly than severely impaired patients. In actual clinical settings, the difficulty levels required on these tasks are minimal but are presented as difficult. This method of malingering detection has proven useful (Bernard & Fowler, 1990; Binder & Willis, 1991). Another proven method looks for discrepancies between recognition and recall tasks, and for consistency across repeated measures (Greiffenstein et al., 1994; Guilmette, Hart, Giuliano, & Leininger, 1994; Slick, Hopp, Strauss, Hunter, & Pinch, 1994). Identifying inconsistent performance patterns is a further strategy: malingerers tend to reveal different patterns of incorrect responses than subjects with brain injury, and convincing patterns of impaired test performance may be more difficult to fake across several measurements (Bernard et al., 1993; Goebel, 1983; Greiffenstein et al., 1994; Mittenberg et al., 1993).

The MIND is divided into six parts. The first three parts are identical to the last three parts, with two exceptions. First, the same cards are presented in a different order. Second, given that some malingerers are susceptible to suggestion, the individual is told that this portion of the test may be more difficult when, in reality, it is not. This strategy has accurately discriminated poor-motivation subjects from high-motivation subjects in previous research (Binder, 1993).

The first and second halves of the instrument each contain 20 pages, each with different icons. Sixteen of the 20 pages have a random layout, whereas the remaining four pages are symmetrical to facilitate counting. The examinee is asked to look at the icons and tell the examiner whether the total number of icons on the page is odd or even. The examiner records the odd or even response for each stimulus card, along with the total time from stimulus exposure to the examinee's response. The randomly arrayed cards should take the examinee more time to count than symmetrical cards with the same number of items, and response time should increase gradually as the number of icons increases.
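These timing expectations suggest simple operationalizations. The MIND's actual scoring rules are given in Appendix A and may differ; the functions below are only hypothetical sketches. The first counts performance-curve violations (a card with more icons answered faster than an easier card), and the second measures the change in total response time when one half's cards are re-presented in the other half:

```python
def sequence_errors(cards):
    """cards: list of (n_icons, response_time_seconds) tuples.
    After ordering the cards by icon count, each adjacent pair in which
    the harder card was answered faster than the easier one counts as
    one performance-curve violation."""
    ordered = sorted(cards)  # sorts by n_icons (ties broken by time)
    return sum(1 for (_, t_easy), (_, t_hard) in zip(ordered, ordered[1:])
               if t_hard < t_easy)

def split_index(first_half_times, second_half_times):
    """Percent change in total response time when the same cards are
    re-presented. Sincere examinees should show a practice effect
    (a negative or small value); a sizable increase is suspicious."""
    t1, t2 = sum(first_half_times), sum(second_half_times)
    return 100.0 * (t2 - t1) / t1
```

For example, `sequence_errors([(4, 2.1), (9, 3.0), (16, 5.2)])` returns 0, a sincere monotonic pattern, while `split_index([10.0, 10.0], [12.0, 13.0])` returns 25.0, a 25% slowdown on repeated material.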
The response time for each card and the total response time are compared to the examinee's response times in Part IV. Since Part IV is the same as Part I, response times should decrease due to a practice effect.

In the second and fifth parts of the exam, the examinee is asked to recall as many icons as possible of those presented in Parts I or IV. There is a 5-min limit, and the icons may be drawn or verbally recalled.

In the third and sixth parts of the exam, the examinee is shown 20 different sets of paired icons, one set per stimulus card. One icon on each card was previously seen in the first part of the exam, while the other has never been seen by the examinee. The examinee is asked to choose which of the two icons appeared earlier in the protocol. Here the examiner is looking for inconsistencies between recognition and recall: since recall is more difficult than recognition, items recalled in Parts II or V should be easily recognized in Parts III or VI, and recognition scores should exceed recall scores (see Appendix A for the eight indices' scoring protocol).

1.3. Design and procedure

Before testing, the undergraduate subjects completed a brief demographic questionnaire and signed an informed consent form. Any undergraduate subject who reported a history of head injury producing a loss of consciousness, a mental disorder, and/or work-related experience with the head-injured population was excluded from the study.


The students were randomly assigned to one of two groups, normal or dissimulating. Both groups received information packages for experimental control. The package for the normal controls contained an information sheet describing the testing process without specifically addressing the issue of malingering. These subjects were asked to review the information prior to their scheduled testing session and to perform to the best of their ability.

The dissimulating group received an instructional set that described their role. Dissimulating subjects were asked to pretend they were involved in litigation to determine how much financial compensation they would obtain from the party responsible for a motor-vehicle accident. They were told to imagine that they had not noticed any difference in their mental or physical functioning as a result of the accident, but to picture themselves as laborers presently working on a temporary assignment who felt justified in faking injuries and deserving of all the money the courts would allow. It was explained that their test results would help determine how large their settlement would be. They were encouraged to fake a mild to moderate head injury without making it obvious to the examiner.

The dissimulating group also received specific information on common mild to moderate head-injury symptomatology. The symptom information consisted of materials about minor head trauma available to the public, obtained through a local support group and a rehabilitation center. The instruction sheet also informed them that the person who gave the most realistic portrayal of an actual brain-injured patient would receive US$100 after the results had been collected and analyzed. Subjects were given approximately 1 week to develop a strategy. To assess comprehension of the information packet on minor head trauma, a pretest was administered.
No subject scored less than 80% correct, so all subjects were included in the study.

The senior author or trained psychology graduate students administered the MIND individually in a quiet room. Subjects were told nothing about the instrument beyond what examiners usually tell real patients: that testing would last about half an hour, that the test contains pictures they would be asked to count and remember, and that the test is commonly administered to patients with head injuries. The instrument was presented as difficult when, in fact, it involves only three simple tasks (counting, recall, and recognition). Following completion of the test, dissimulating subjects were interviewed regarding their strategies. During the debriefing interview, two dissimulating subjects admitted that they had not put any effort into feigning a mild to moderate brain injury; both were eliminated from the study.

The statistical analyses for this study were conducted in two blocks. The first block focused on hypothesis testing, and the second examined a new scoring index and protocol. To test the differences between groups and the discriminant validity of the MIND, a sample of 62 subjects was used. To test the hypotheses, a one-way analysis of variance (ANOVA) with eight orthogonal contrasts was used, comparing dissimulators with normals and brain-injured subjects on each scoring index. A test for homogeneity of variance was also examined.
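For readers who want the mechanics, the F statistic underlying the one-way ANOVAs reported below can be computed directly. This is a generic textbook computation, not the authors' analysis code, and the small example scores are invented:

```python
def one_way_f(groups):
    """One-way ANOVA F statistic: between-groups mean square divided
    by within-groups mean square. groups: list of lists of scores."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented Forced-Choice-style scores for three small groups:
f = one_way_f([[70, 74, 72], [71, 73, 75], [60, 63, 64]])
# f would then be compared against the critical F for (k-1, n-k)
# degrees of freedom at the chosen alpha level.
```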


2. Results

2.1. Data analysis

All of the normal subjects reported that they gave their best effort on the test. Sixty-two percent of the dissimulating subjects answered all 10 pretest items correctly, and the remaining 38% missed only one or two items. These results indicate that they had a clear understanding of the information packet on head-injury sequelae.

Table 1 shows the demographic characteristics of the three subject groups. One-way ANOVAs revealed significant differences among the normal, experimental, and patient groups on age but not on education. The brain-injured group was significantly older than the normal and dissimulating groups, F(2,59) = 9.07, P < .0004. A chi-square procedure revealed significant proportional differences for ethnicity (P < .0001) but not for gender (P = .1281).

2.2. Validation of scoring indexes

To determine how the predictor variables discriminated among groups, the sample means were examined following the protocol recommended by Tabachnick and Fidell (1989). The total N of 63 was reduced to 62 with the deletion of one case, from the normal group, identified as a multivariate outlier at P < .001. Evaluation of the assumptions of normality, homogeneity of variance–covariance matrices, linearity, and multicollinearity was unsatisfactory. Box's M test revealed a violation of the equality of group variances (Box's M = 188.89, P < .001): the cell with the smallest sample size (brain injured) produced larger variances–covariances than the larger groups (normal and dissimulating), suggesting heterogeneity in the brain-injured group. As recommended by Tabachnick and Fidell, when sample sizes are unequal and Box's M test is significant, Pillai's criterion should be used instead of Wilks' Lambda to evaluate multivariate significance. The results show multivariate significance at P < .001.
A multivariate analysis of covariance was performed on the eight scoring indices with age as the independent variable. With the use of Pillai's criterion, the combined dependent variables were not significantly related to age, F(8,47) = 0.60, P > .05, and there was no significant association between any individual scoring index and age.

Table 1
Descriptive statistics of subject demographics

Group           Age [mean (S.D.)]   Race [white/non-white]   Education [mean (S.D.)]   Gender [male/female]
Normals         26.2 (11.8)         17/7                     14.5 (1.2)                9/15
Brain injured   38.8 (14.3)         13/4                     13.8 (1.9)                10/7
Dissimulators   22.9 (5.9)          12/9                     14.3 (1.2)                4/17

Descriptive statistics and analyses of group differences on the eight scoring indices are summarized in Table 2.

Table 2
Mean group scores and Scheffe's test comparisons of eight scoring indices

Index      Normals [mean (S.D.)]   Brain injured [mean (S.D.)]   Dissimulators [mean (S.D.)]
FC         72.88 (4.10)            72.59 (5.52)                  62.10 (8.87)a
G vs. UG   0.40 (0.58)             0.76 (0.97)                   1.71 (1.35)a
Split      0.38 (8.08)             7.59 (19.77)                  12.00 (28.48)b
SEQ        4.40 (1.29)             5.29 (1.26)                   5.38 (1.88)
SIMIL      4.36 (1.99)             5.06 (1.82)                   5.57 (1.96)
CONSIS     1.48 (1.36)             2.35 (2.03)                   2.43 (2.20)
Learn      18.24 (4.38)            22.47 (5.33)                  17.57 (3.92)a
Recall     18.08 (4.70)            14.18 (6.30)                  13.43 (4.40)

FC = Forced-Choice; G vs. UG = Grouped vs. Ungrouped; SEQ = Sequence; SIMIL = Similarity; CONSIS = Consistency.
a Mean group score significantly different from those of normals and brain injured.
b Mean group score significantly different from that of brain injured.

There was a significant difference in mean Forced-Choice scores among the three groups, F(2,59) = 20.70, P < .05: dissimulators scored significantly lower than both the brain-injured and normal groups.

There was a significant difference in mean Grouped vs. Ungrouped scores among the three groups, F(2,59) = 10.39, P < .05: dissimulators scored significantly higher than the brain-injured and normal groups.

There was a significant difference in mean Split scores among the three groups, F(2,59) = 4.00, P < .05. Dissimulators took significantly more time to count the second set of similar icons, whereas the brain-injured group took less time; the normal group was consistent across both halves.

There was a significant difference among the three groups on the Sequence variable, F(2,59) = 3.72, P < .05. The dissimulating and brain-injured groups' scores were significantly worse than the normal group's. The brain-injured sample's poor performance on this task was unexpected.

There was no significant difference among the three groups on the Similarity variable, F(2,59) = 2.70, P > .05; the brain-injured, normal, and dissimulating groups performed similarly. Both the brain-injured and dissimulating groups exceeded a 20% difference in the time taken to complete Parts I and IV.

There was no significant difference among the three groups on the Consistency variable, F(2,59) = 2.52, P > .05; the brain-injured group performed similarly to the normal and dissimulating groups. It appears that the brain-injured group was inconsistent in recognizing the icons that they had recalled in earlier sections. These results were unexpected.

There was a significant difference among the three groups on the free Recall variable, F(2,59) = 6.09, P < .05. The dissimulating and brain-injured groups performed significantly worse than the normals; however, there was no significant difference between the dissimulating and brain-injured groups' free recall performance.
There was a significant difference in mean Learn scores among the three groups, F(2,59) = 6.44, P < .05. The Learn score was calculated by subtracting each subject's total free Recall score from his or her total recognition score. The dissimulating group performed similarly to the normal group, while the brain-injured group's performance was significantly different from both. Inspection of the recognition scores by group showed that the normals (M = 36.5) and brain-injured subjects (M = 36.2) performed similarly, whereas the dissimulators' performance was significantly worse (M = 31.0). When these results are compared to the dissimulators' free recall performance, they suggest that the dissimulators misjudged the level of difficulty of a task that brain-injured people tend to perform well on.

2.3. Discriminant analysis

Univariate F tests indicated significant differences for the Forced-Choice, Grouped vs. Ungrouped, Learn, Recall, Sequence, and Split indices. A direct discriminant analysis was performed using these six scoring indices as predictors of membership in three groups: normal controls, mildly to moderately brain-injured patients, and normals instructed to feign a mild to moderate head injury. The analysis was based on the scores of 62 subjects. A pooled within-groups variance–covariance matrix was used for the discriminant functions. This resulted in two significant discriminant functions that were strongly related to the predictor variables. The first function accounted for 83% of the between-groups variance and the second for 15%. Two of the six variables correlated with discriminant Function 1 and best discriminated dissimulators from brain-injured subjects and normals. Table 3 lists the correlations between the discriminating variables and the two functions.

Table 3
Pooled within-groups correlations between discriminating variables and canonical discriminant functions

Scoring index           Function 1   Function 2
Forced-Choice           .77          .09
Grouped vs. Ungrouped   .56          .23
Learn                   .22          .73
Sequence                .21          .54
Recall                  .29          .67
Split                   .27          .45

2.4. Classification accuracy

Table 4 contains the discriminant function analysis results for the MIND and shows a 68% overall correct classification rate, with 76% of the dissimulators correctly identified by the combined scores on the six scoring indices. Only two dissimulators were misclassified as brain injured. Three brain-injured subjects were misclassified as dissimulators, and eight were misclassified as normals. This reflects only a 10% false negative rate in the dissimulating group.

Table 4
Classification results

                            Predicted group membership
Actual group                No. of cases   1 (%)   2 (%)   3 (%)
Group 1 (normals)           24             83.3    0.0     16.7
Group 2 (dissimulators)     21             14.3    76.2    9.5
Group 3 (brain injured)     17             47.1    17.6    35.3

Since the instrument was designed to discriminate the sincere performance of actual mild to moderate head-trauma patients from feigned mild to moderate brain injury, a new discriminant function analysis was performed using only two groups (mildly to moderately brain-injured subjects and dissimulators). Table 5 contains the new results. An 82% overall classification accuracy was achieved. Eliminating the normal group from the analysis did not reduce the number of false positives in the brain-injured group, but it did decrease the false negative rate in the dissimulating group: of the three dissimulating subjects misclassified as normal in the first analysis, one was correctly classified in the second, and the remaining two were misclassified as brain injured.

Table 5
Classification results — six original variables

                                          Predicted group membership
Actual group                              No. of cases   1 (%)   2 (%)
Group 1 (dissimulators)                   21             81.0    19.0
Group 2 (mild to moderate brain injury)   17             17.6    82.4

2.5. Debriefing interview

Upon completion of the test, the dissimulating subjects were interviewed concerning their performance. Interestingly, four subjects (10%) admitted that, despite the instructions and the incentives, they did not try to fake on this test. This is consistent with other studies (Goebel, 1983; Heaton et al., 1978) in which posttest interviews revealed that approximately 10–20% of the normal subjects instructed to feign a brain injury did not try to fake, because they considered themselves too honest or simply chose not to.

Approximately 90% of the dissimulating subjects understood that recognition is easier than recall. However, the results suggest that they had a difficult time remembering how well they performed on the recognition vs. recall sections and consequently produced an inconsistent performance pattern. Forty percent reported attending to the serial position of the items recalled, in particular choosing to recall items most recently seen.
Fifty percent reported that they deliberately performed slowly, believing that brain-injured patients would fatigue early in the test. Approximately 70% reported giving intentionally wrong responses, with 30% opting to respond randomly and 40% developing a systematic method for when or how to respond incorrectly throughout the protocol. Ninety percent of the dissimulating subjects said they could not remember their first-half responses well enough to replicate them in the second half. Approximately 50% mentioned that they deliberately provided wrong responses to stimulus cards they thought would be too difficult for a brain-injured person. The results of these poststudy interviews are consistent with those reported by Beetar and Williams (1995) and Iverson et al. (1991).
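The overall rates reported in Tables 4 and 5 can be recovered from the underlying confusion matrices. The counts below are reconstructed by the present editor from the reported group sizes and row percentages (e.g., 76.2% of 21 dissimulators = 16 subjects); this is a reader's check, not the authors' analysis code:

```python
def overall_accuracy(confusion):
    """confusion: square matrix of counts, rows = actual group,
    columns = predicted group. Diagonal cells are correct calls."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

# Table 4 (rows/columns: normals, dissimulators, brain injured):
three_group = [[20, 0, 4],   # 24 normals: 83.3% / 0.0% / 16.7%
               [3, 16, 2],   # 21 dissimulators: 14.3% / 76.2% / 9.5%
               [8, 3, 6]]    # 17 brain injured: 47.1% / 17.6% / 35.3%
# overall_accuracy(three_group) is about 0.68, matching the reported 68%.

# Table 5 (rows/columns: dissimulators, brain injured):
two_group = [[17, 4],        # 21 dissimulators: 81.0% / 19.0%
             [3, 14]]        # 17 brain injured: 17.6% / 82.4%
# overall_accuracy(two_group) is about 0.82, matching the reported 82%.
```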


3. Discussion

The sophisticated dissimulators in this study did not display consistent patterns of over-failure on every scoring dimension, although they did on four of the original eight indices. The results of the posttest interviews emphasize the importance of a multidimensional approach to detecting dissimulation, especially with sophisticated malingerers. Using six predictor variables, we were able to discriminate 82% of the mildly to moderately brain-injured subjects and dissimulators accurately.

The dissimulating group overestimated the level of item difficulty. This performance pattern was expected: the dissimulating group did poorly on tasks that were easy for the mildly to moderately brain-injured group. The dissimulating group also misjudged the level of item difficulty for grouped vs. ungrouped icons. The grouped icon cards should have taken less time to count than the ungrouped icon cards, and the mildly to moderately brain-injured group was consistent and performed as predicted. These results are consistent with other research using this type of detection strategy (Beetar & Williams, 1995; Goebel, 1983).

As predicted, scores of the dissimulating subjects on the Forced-Choice variable did not fall below chance. This is consistent with other studies using this type of detection strategy (Beetar & Williams, 1995; Binder & Willis, 1991; Greiffenstein et al., 1994).

One hypothesis of the present study was that sincere subjects would decrease their counting time over successive trials and make fewer errors, whereas dissimulating subjects would produce the opposite pattern. The results support this hypothesis. The normal (Split, M = 0.92, P < .05) and brain-injured (Split, M = 9.82, P < .05) groups showed shorter response times over successive trials than the dissimulating group (Split, M = 12.0, P < .05), and they produced more accurate counts than the dissimulating group over the successive trials.
The dissimulating group maintained consistency in item response, as did the brain-injury group. The items they chose to recognize correctly were the same items they correctly recalled in the previous section. The dissimulating group could detect and outwit this type of detection strategy. Forty-one percent of the brain-injured subjects were currently working full time at the time of the study. Their current level of functioning may account for the number of false positives in the brain-injured group. Eight were misclassified as normals, thus, reducing the overall classification accuracy rate to only 68%. When the normal group was eliminated from the discriminant function analysis, the classification accuracy rate increased to 82%. The MIND seems sensitive to specific functional deficits following a mild to moderate neurological insult, most specifically, memory impairment and/or attention factors that can affect memory performance. The brain-injured group performed similarly to the normal controls on all measures except for free recall and performance curve measures. The results of the sequencing scoring index, which measures performance curves as the items become more difficult, were unexpected. There were several sequence errors in the time it took to count easy stimulus cards, compared to similar but more difficult ones. The results from previous studies (Goebel, 1983; Greiffenstein et al., 1994) using this type of detection strategy have shown it to be less effective at discriminating seriously braininjured patients from probable malingerers. However, we did not expect this finding to


occur in our mild to moderate impairment subjects; it was hypothesized that this group would perform similarly to normals. Several possibilities may account for the brain-injured group's poor performance. First, the instrument has no standard protocol on timed responses for individuals who admit getting off-track while counting the cards and start over. Second, visual tracking may be more challenging for some difficult cards than others. Third, some subjects may have put little effort into counting the difficult cards and responded prematurely. Finally, attention and concentration deficits are not uncommon following mild to moderate head trauma, and this predictor variable may be highly dependent upon intact attention skills.

Free recall also appears difficult even for mildly to moderately brain-injured subjects, although their recognition performance was similar to that of the normal group. This performance pattern suggests retrieval deficits rather than retention impairments.

The present study suggests that the MIND is a promising instrument. However, cross-validation with a larger number of subjects is needed. It would also be interesting to study how MIND performance varies as a function of time since injury, severity of injury, and location of injury. Finally, the present study was confined to demonstrating the sensitivity of the MIND's present scoring indices using normal college students instructed to deceive. This sample may not be representative of individuals at risk for head trauma, so replication with a more representative sample would be useful. The MIND appears to have potential for identifying individuals feigning mild to moderate neuropsychological deficits. A particular advantage of the MIND is its sensitivity to a number of different simulating strategies.
Within the context of a thorough neuropsychological evaluation, the MIND may provide additional information regarding a patient’s degree of sincerity.

Appendix A. MIND scoring indices

A.1. Forced-choice

This is the number of items for which the subject receives credit for correct responding in Parts I, III, IV, and VI. A perfect score is 80; each item is worth one point.

A.2. Grouped vs. Ungrouped

This is the number of occasions on which the subject took longer to count grouped icons than to count an identical number of ungrouped icons. The grouped icons should take less time to count than the same number of ungrouped, randomly arrayed icons. The index is scored from the time difference: each pair with a value below 0.0 receives one point, for a maximum score of eight. High scores indicate that the subject is consciously attempting to perform without sincere effort.
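The Grouped vs. Ungrouped rule can be sketched as follows (the function name, data layout, and sample timings are our own illustration, not part of the MIND materials):

```python
def grouped_vs_ungrouped_score(grouped_times, ungrouped_times):
    """One point for each of the eight card pairs on which the grouped-icon
    card took LONGER to count than its ungrouped counterpart, i.e. the
    (ungrouped - grouped) time difference falls below 0.0."""
    return sum(1 for g, u in zip(grouped_times, ungrouped_times) if u - g < 0.0)

# Hypothetical timings (seconds) for the eight card pairs: on two pairs
# the grouped card was counted more slowly, the suspect pattern.
grouped = [3.1, 4.0, 5.2, 6.8, 7.0, 8.1, 9.5, 10.2]
ungrouped = [4.0, 5.1, 5.0, 7.9, 8.2, 7.9, 10.8, 11.5]
print(grouped_vs_ungrouped_score(grouped, ungrouped))  # → 2
```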


A.3. Split

This is the time difference between completing Part I and Part IV of the test for all icons. Normals and brain-injured subjects should perform about the same or show a slight improvement. People attempting to perform poorly tend to lose track of their timed responses and thus show inconsistencies.

A.4. Similarity

This index scores one point for each occurrence in which the time to count a set of randomly organized icons in Part I differs by more than 20% from the time to count the same number of randomly organized icons in Part IV.
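A minimal sketch of the Similarity index, assuming the Part I time serves as the baseline for the 20% criterion (the test manual may define the percentage differently; names and sample values are hypothetical):

```python
def similarity_score(part1_times, part4_times, threshold=0.20):
    """One point whenever the Part I counting time for a card differs by
    more than 20% from the Part IV time for the matching card."""
    return sum(
        1 for t1, t4 in zip(part1_times, part4_times)
        if abs(t4 - t1) > threshold * t1
    )

# Hypothetical paired timings: only the middle card deviates by more
# than 20% from its Part I baseline.
print(similarity_score([4.0, 6.0, 8.0], [4.5, 9.0, 8.2]))  # → 1
```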

A.5. Sequence

This refers to sequence errors. A sequence error occurs when a stimulus card with a greater number of randomly organized icons takes less time to count than a similar card with fewer icons. Response times should increase gradually and sequentially as the number of icons on each card increases; discrepancies within this sequence are scored against the person.
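The sequence-error count amounts to counting inversions in the response-time series when cards are ordered by icon count. A sketch (function name and sample times are our own):

```python
def sequence_score(times_in_order_of_increasing_icon_count):
    """Count sequence errors: each case where a card with more icons was
    counted FASTER than the preceding card with fewer icons."""
    times = times_in_order_of_increasing_icon_count
    return sum(1 for prev, curr in zip(times, times[1:]) if curr < prev)

# Hypothetical response times for cards ordered by icon count:
# two inversions (4.1 < 4.5 and 5.8 < 6.0), so two sequence errors.
print(sequence_score([3.0, 4.5, 4.1, 6.0, 5.8]))  # → 2
```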

A.6. Consistency

One point is scored each time an icon freely recalled in Part II is not correctly recognized in Parts III and VI, or is not freely recalled in Part V. A point is also scored if an icon is freely recalled in Part V but is not recognized in Part VI. A total of 40 points is possible. High scores suggest poor effort.
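One plausible reading of the Consistency rule can be sketched with each part's responses held as a set of icon names; the exact point-allocation rules in the MIND manual may differ, and the icon names below are invented:

```python
def consistency_score(recall_2, recog_3, recall_5, recog_6):
    """A point is scored when an icon recalled in Part II is missing from
    the Part III or Part VI recognition responses, or from the Part V
    recall; a further point when an icon recalled in Part V is missing
    from the Part VI recognition responses. High scores suggest poor effort."""
    score = sum(
        1 for icon in recall_2
        if icon not in recog_3 or icon not in recog_6 or icon not in recall_5
    )
    score += sum(1 for icon in recall_5 if icon not in recog_6)
    return score

# Hypothetical responses: "tree" is freely recalled early but later
# fails to be recognized, the inconsistency this index is built to catch.
print(consistency_score(
    recall_2={"star", "tree"},
    recog_3={"star"},
    recall_5={"star"},
    recog_6={"star"},
))  # → 1
```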

A.7. Learn

This is the ratio of the number of items correctly recognized in Parts III and VI to the number of items correctly recalled in Parts II and V. Theoretically, one would expect recognition to exceed recall, since the former is an easier task.
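A sketch of the Learn ratio under that reading (the zero-recall guard is our own addition, not specified by the test):

```python
def learn_ratio(n_recognized, n_recalled):
    """Recognition-to-recall ratio. Recognition is the easier task, so a
    sincere responder's ratio should be at least 1.0; a ratio at or below
    1.0 is the suspect pattern."""
    return n_recognized / n_recalled if n_recalled else float("inf")

# Hypothetical counts: 18 items recognized vs. 9 recalled.
print(learn_ratio(18, 9))  # → 2.0
```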

A.8. Recall

This is the number of items successfully recalled in Parts II and V.


References

Bectar, J., & Williams, J. (1995). Malingering response styles on the Memory Assessment Scales and symptom validity tests. Archives of Clinical Neuropsychology, 10, 57–72.

Bernard, L. C., & Fowler, W. (1990). Assessing the validity of memory complaints: performance of brain damaged and normal individuals on Rey's task to detect malingering. Journal of Clinical Psychology, 46, 432–436.

Bernard, L. C., McGrath, M. J., & Houston, W. (1993). Discriminating between simulated malingering and closed head injury on the Wechsler Memory Scale-Revised. Archives of Clinical Neuropsychology, 8, 539–551.

Binder, L. M. (1993). Assessment of malingering after mild head trauma with the Portland Digit Recognition Test. Journal of Clinical and Experimental Neuropsychology, 15, 170–182.

Binder, L. M., & Rohling, M. L. (1995). Assessment of functional problems. Paper presented at the National Academy of Neuropsychologists' Conference, San Francisco, CA.

Binder, L. M., & Willis, S. C. (1991). Assessment of motivation after financially compensable minor head trauma. Psychological Assessment, 3, 175–181.

Frederick, R. I., Sarfaty, S. D., Johnston, D., & Powel, J. (1994). Validation of a detector of response bias on a forced-choice test of nonverbal ability. Neuropsychology, 8, 118–125.

Goebel, R. A. (1983). Detection of faking on the Halstead–Reitan Neuropsychological Test Battery. Journal of Clinical Psychology, 42, 731–742.

Greiffenstein, M., Baker, W., & Gola, T. (1994). Validation of malingered amnesia with a large clinical sample. Psychological Assessment, 6, 218–224.

Gudjonsson, G. H., & Shackleton, H. (1986). The pattern of scores on Raven's Matrices during 'faking bad' and 'non-faking' performance. British Journal of Clinical Psychology, 25, 35–41.

Guilmette, T. J., Hart, K. J., Giuliano, A. J., & Leininger, B. E. (1994). Detecting simulated memory impairment: comparison of the Rey Fifteen-Item Test and the Hiscock Forced-Choice Procedure. The Clinical Neuropsychologist, 8, 283–294.

Heaton, R. K., Smith, H. H., Lehman, R. A. W., & Vogt, A. T. (1978). Prospects for faking believable deficits on neuropsychological testing. Journal of Consulting and Clinical Psychology, 46, 892–900.

Iverson, G. L., & Franzen, M. D. (1994). The Recognition Memory Test, Digit Span, and Knox Cube Test as markers of malingered memory impairment. Assessment, 1, 323–334.

Iverson, G. L., Franzen, M. D., & McCracken, L. M. (1991). Evaluation of an objective assessment technique for the detection of malingered memory deficits. Law and Human Behavior, 15, 667–676.

Kraus, J., & McArthur, D. (1995). The epidemiology of brain injury. Los Angeles, CA: Injury Prevention Resource Center, University of Los Angeles Department of Epidemiology.

Lezak, M. D. (1995). Neuropsychological assessment (3rd ed.). New York: Oxford University Press.

Martin, R. C., Bolter, J. F., Todd, M. E., Gouvier, D., & Niccolls, R. (1993). Effects of sophistication and motivation on the detection of malingered memory performance using a computerized forced-choice task. Journal of Clinical and Experimental Neuropsychology, 15, 867–880.

Mills, S. R., & Putnam, S. H. (1995). Head injury and postconcussive syndrome. New York: Churchill and Livingston.

Mittenberg, W., Azrin, R., Millsaps, C., & Heilbronner, R. (1993). Identification of malingered head injury on the Wechsler Memory Scale-Revised. Psychological Assessment, 5, 34–40.

Niles, K. J., & Sweet, J. J. (1994). Neuropsychological assessment and malingering: a critical review of past and present strategies. Archives of Clinical Neuropsychology, 9, 501–552.

Rogers, R. (1984). Towards an empirical model of malingering and deception. Behavioral Sciences and the Law, 2, 93–111.

Schretlen, D. J. (1988). The use of psychological tests to identify malingered symptoms of mental disorder. Clinical Psychology Review, 8, 451–476.

Slick, D., Hopp, G., Strauss, E., Hunter, M., & Pinch, D. (1994). Detecting dissimulation: profiles of simulated malingerers, traumatic brain-injured patients, and normal controls on a revised version of the Hiscock and Hiscock Forced-Choice Memory Test. Journal of Clinical and Experimental Neuropsychology, 16, 472–481.


Tabachnick, B. G., & Fidell, L. S. (1989). Using multivariate statistics (2nd ed.). New York: Harper Collins.

Thomas, J., & Wanlass, R. (1994). Development of individual and family assessment instruments for the identification of neuropsychological malingering. Archives of Clinical Neuropsychology, 9, 191.

Trueblood, W., & Schmidt, M. (1993). Malingering and other validity considerations in the neuropsychological evaluation of mild head injury. Journal of Clinical and Experimental Neuropsychology, 15, 578–590.

Wiggins, E. C., & Brandt, J. (1988). The detection of simulated amnesia. Law and Human Behavior, 12, 57–78.