Publications by authors named "Bradley N Axelrod"

67 Publications

Differentiating poor validity from probable impairment on the medical symptom validity test: a cross-validation study.

Int J Neurosci 2019 Mar 28;129(3):217-224. Epub 2018 Nov 28.

Henry Ford Allegiance, Jackson, MI, USA.

Aims: In neuropsychological evaluations, it is often difficult to ascertain whether poor performance on measures of validity is due to poor effort or malingering, or whether there is genuine cognitive impairment. Dunham and Denney created an algorithm to assess this question using the Medical Symptom Validity Test (MSVT). We assessed the ability of their algorithm to detect poor validity versus probable impairment, and concordance of failure on the MSVT with other freestanding tests of performance validity.

Methods: Two previously published datasets (n = 153 and n = 641, respectively) from outpatient neuropsychological evaluations were used to test Dunham and Denney's algorithm, and to assess concordance of failure rates with the Test of Memory Malingering and the forced choice measure of the California Verbal Learning Test, two commonly used performance validity tests.

Results: In both datasets, none of the four cutoff scores for failure on the MSVT (70%, 75%, 80%, or 85%) identified a poor validity group with proportionally aligned failure rates on other freestanding measures of performance validity. Additionally, the protocols with probable impairment did not differ from those with poor validity on cognitive measures.

Conclusions: Although the algorithm appeared to be a promising approach for evaluating failure on the easy MSVT subtests when clinical data are unavailable, the current findings indicate that the advanced interpretation (AI) program of the MSVT remains the gold standard for doing so. Future research should build on this effort to address shortcomings in measures of effort in neuropsychological evaluations.
Source: http://dx.doi.org/10.1080/00207454.2018.1526800

Use of Latent Class Analysis to define groups based on validity, cognition, and emotional functioning.

Clin Neuropsychol 2017 Aug - Oct;31(6-7):1087-1099. Epub 2017 Jun 15.

John D. Dingell VA Medical Center, Detroit, MI, USA.

Objective: Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychology data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas.

Method: A data-set of 680 neuropsychological evaluation protocols was analyzed using a LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center.

Results: A four-class model emerged as the best fitting model of latent classes. The resulting classes were distinct based on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two Classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress.

Conclusions: LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validities is supported.
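
As an illustration of the class-enumeration step described above, the sketch below fits mixture models with one to six components and compares them by BIC. It uses a Gaussian mixture over simulated, standardized validity/cognition/emotion scores as a stand-in for a formal latent class model; the class profiles, sample size, and variable layout are assumptions for illustration only, not the study's data or software.

```python
# Sketch: enumerate candidate class solutions and compare them by BIC,
# analogous to the four-class solution described above. A Gaussian mixture
# over continuous scores stands in for a formal latent class model here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical standardized protocol scores:
# [performance validity, symptom validity, cognition, emotional distress]
means = np.array([[-1.0, -1.0, -0.8,  0.3],   # fails both validity types
                  [ 0.5, -1.2,  0.1,  0.9],   # intact PVT, heightened symptom report
                  [ 0.6,  0.4,  0.6, -0.3],   # intact validity, stronger cognition
                  [ 0.5,  0.3, -0.6,  0.2]])  # intact validity, weaker cognition
sizes = [150, 160, 190, 180]                  # 680 simulated protocols in total
X = np.vstack([rng.normal(m, 0.5, (s, 4)) for m, s in zip(means, sizes)])

bics = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)              # lower BIC = preferred solution
print("BIC by class count:", {k: round(v) for k, v in bics.items()})
print("Best-fitting solution:", best_k, "classes")

best = GaussianMixture(n_components=best_k, n_init=5, random_state=0).fit(X)
print("Class sizes:", np.bincount(best.predict(X)))
```
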
Source: http://dx.doi.org/10.1080/13854046.2017.1341550

Clinimetric validity of the Trail Making Test Czech version in Parkinson's disease and normative data for older adults.

Clin Neuropsychol 2017 Jan-Dec;31(sup1):42-60. Epub 2017 May 23.

National Institute of Mental Health, Klecany, Czech Republic.

Objective: Data on the influence of demographic variables on Trail Making Test (TMT) performance in older individuals, as well as empirical findings on its clinical validity in predementia states such as Parkinson's disease mild cognitive impairment (PD-MCI), are limited. The principal aims of this study were to provide normative data for the Czech population of older adults and to explore the test's clinimetric properties in differentiating PD-MCI from PD patients with normal cognition (PD-NC).

Method: The study included 125 PD patients (77 classified as PD-MCI and 48 as PD-NC) and 528 cognitively intact Czech adults: older individuals (60-74 years, further subdivided for normative tables into 60-64, 65-69, and 70-74 age groups) and very old individuals (aged 75-96, further subdivided into 75-79, 80-84, and 85-96).

Results: Age and, to a lesser extent, education, but not gender, were associated with most TMT basic and derived indices (e.g., TMT-B - TMT-A); however, the TMT-B/TMT-A ratio was independent of both age and education. We provide corresponding T-scores that minimize the effect of demographic variables. TMT basic and derived indices showed high discriminative validity for the differentiation of PD-MCI from PD-NC (all p < .05). Classification accuracy for the differentiation of PD-MCI from controls, based on norm-adjusted scores, was optimal for TMT-B only (80% area under the curve), whereas classification accuracy of the TMT for PD-MCI vs. PD-NC was suboptimal.

Conclusions: The cut-offs and normative standards are useful in clinical practice for those working with PD patients and very old adults.
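
The derived indices and demographically adjusted scores referenced above can be sketched as follows; the regression-residual approach and the simulated ages, education levels, and completion times are illustrative assumptions, not the published Czech normative procedure.

```python
# Sketch: derived TMT indices (B - A, B / A) and a demographically adjusted
# T score computed from regression residuals on age and education.
# All data below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 528
age = rng.integers(60, 97, n)                      # ages 60-96
educ = rng.integers(8, 21, n)                      # years of education
tmt_a = 30 + 0.8 * (age - 60) - 0.5 * (educ - 12) + rng.normal(0, 8, n)
tmt_b = 70 + 2.0 * (age - 60) - 1.5 * (educ - 12) + rng.normal(0, 20, n)

b_minus_a = tmt_b - tmt_a                          # difference index
b_over_a = tmt_b / tmt_a                           # ratio index

# Regress TMT-B time on demographics; residuals carry the adjusted signal.
# Longer time = worse performance, so the sign is flipped for T scores.
X = np.column_stack([np.ones(n), age, educ])
beta, *_ = np.linalg.lstsq(X, tmt_b, rcond=None)
resid = tmt_b - X @ beta
t_scores = 50 - 10 * (resid / resid.std(ddof=1))

print("Mean B - A:", round(b_minus_a.mean(), 1), " Mean B/A:", round(b_over_a.mean(), 2))
print("Mean adjusted T (should be ~50):", round(t_scores.mean(), 1))
```
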
Source: http://dx.doi.org/10.1080/13854046.2017.1324045

Strategies of successful and unsuccessful simulators coached to feign traumatic brain injury.

Clin Neuropsychol 2017 04 13;31(3):644-653. Epub 2017 Jan 13.

Department of Psychology, Wayne State University, Detroit, MI, USA.

Objective: The present study evaluated strategies used by healthy adults coached to simulate traumatic brain injury (TBI) during neuropsychological evaluation.

Method: Healthy adults (n = 58) were coached to simulate TBI while completing a test battery consisting of multiple performance validity tests (PVTs), neuropsychological tests, a self-report scale of functional independence, and a debriefing survey about strategies used to feign TBI.

Results: "Successful" simulators (n = 16) were classified as participants who failed 0 or 1 PVT and also scored as impaired on one or more neuropsychological index. "Unsuccessful" simulators (n = 42) failed ≥2 PVTs or passed PVTs but did not score impaired on any neuropsychological index. Compared to unsuccessful simulators, successful simulators had significantly more years of education, higher estimated IQ, and were more likely to use information provided about TBI to employ a systematic pattern of performance that targeted specific tests rather than performing poorly across the entire test battery.

Conclusion: Results contribute to a limited body of research investigating strategies utilized by individuals instructed to feign neurocognitive impairment. Findings signal the importance of developing additional embedded PVTs within standard cognitive tests to assess performance validity throughout a neuropsychological assessment. Future research should consider specifically targeting embedded measures in visual tests sensitive to slowed responding (e.g. response time).
Source: http://dx.doi.org/10.1080/13854046.2016.1278040

Neuropsychological test validity in Veterans presenting with subjective complaints of 'very severe' cognitive symptoms following mild traumatic brain injury.

Brain Inj 2017 7;31(1):32-38. Epub 2016 Nov 7.

Department of Mental Health Services, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA.

Objective: This study explored the utility of combining data from measures of performance validity and symptom validity among Veterans undergoing neuropsychological evaluation for mild traumatic brain injury (mTBI).

Background: Persistent cognitive impairments following mTBI are often reported by returning combat veterans. However, objectively-measured cognitive deficits are not common among individuals with mTBI, raising the question of whether negative impression management influences self-ratings.

Methods: Self-report ratings were obtained for memory, concentration, decision-making, and processing speed/organization using a 5-point scale ranging from 'none' to 'very severe'. Veterans also completed brief neuropsychological testing which included measures of performance validity.

Results: Study 1 examined data from 122 participants and demonstrated that veterans reporting a 'very severe' cognitive deficit were over three times as likely to demonstrate poor effort on a validity test as those without a 'very severe' rating. Study 2 replicated these findings in an independent sample of 127 veterans and also demonstrated that both the severity of self-report ratings and performance on an embedded measure of effort were predictive of poor effort on a stand-alone performance validity test.

Conclusion: Veterans with suspected mTBI who report 'very severe' cognitive impairment have a greater likelihood of putting forth sub-optimal effort on objective testing.
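
The 'over three times as likely' finding is an effect of this kind; a minimal sketch of how such an odds ratio and its confidence interval can be computed from a 2x2 table is shown below, using hypothetical counts rather than the study's data.

```python
# Sketch: odds ratio for PVT failure given a 'very severe' self-rating,
# from a 2x2 table with a Woolf (log-OR) confidence interval.
# The counts below are hypothetical placeholders, not the study data.
import numpy as np

#                  PVT fail  PVT pass
table = np.array([[20, 25],     # 'very severe' rating endorsed
                  [15, 62]])    # no 'very severe' rating

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)                    # (20*62)/(25*15) = 3.31
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of the log odds ratio
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"Odds ratio = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```
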
Source: http://dx.doi.org/10.1080/02699052.2016.1218546

The Mild Brain Injury Atypical Symptoms (mBIAS) scale in a mixed clinical sample.

J Clin Exp Neuropsychol 2016 Sep 9;38(7):721-9. Epub 2016 May 9.

Defense and Veterans Brain Injury Center, Silver Spring, MD, USA.

Introduction: The Mild Brain Injury Atypical Symptoms (mBIAS) scale was developed as a symptom validity test (SVT) for use with patients following mild traumatic brain injury. This study was the first to examine the clinical utility of the mBIAS in a mixed clinical sample presenting to a Department of Veterans Affairs (VA) neuropsychology clinic.

Method: Participants were 117 patients with mixed etiologies (85.5% male; age: M = 39.2 years, SD = 11.6) from a VA neuropsychology clinic. Participants were divided into pass/fail groups using two different SVT criteria, based on select validity scales from the Minnesota Multiphasic Personality Inventory-2 (MMPI-2): first, Infrequency Scale (F) scores: (a) MMPI-F-Fail (n = 21) and (b) MMPI-F-Pass (n = 96); and, second, Symptom Validity Scale (FBS) scores: (a) MMPI-FBS-Fail (n = 36) and (b) MMPI-FBS-Pass (n = 81).

Results: The mBIAS demonstrated good internal consistency, and each item contributed meaningfully to the total score. At a symptom exaggeration base rate of 35%, an mBIAS cutoff of ≥11 was optimal for screening symptom exaggeration when groups were classified using both F and FBS scales. This cutoff score resulted in very high specificity (.89 to .94); moderate-high positive predictive power (.71 to .75) and negative predictive power (.72 to .79); and low-moderate sensitivity (.31 to .57). At all base rates of probable somatic exaggeration, a cutoff of ≥16 resulted in perfect specificity and positive predictive power, but very low sensitivity.

Conclusions: The mBIAS has potential for use in samples outside of mild traumatic brain injury. In settings where the symptom exaggeration base rate is 35%, a cutoff of ≥11 may be used as a "red flag" for further evaluation, but should not be relied on for clinical decision making. At all base rates of probable somatic exaggeration, psychologists can be confident that patients who score ≥16 are exaggerating. Importantly, however, this cutoff may fail to identify a large proportion of patients who are exaggerating.
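
The way predictive power shifts with the assumed base rate, as emphasized above, follows directly from Bayes' rule. The sketch below recomputes PPV and NPV across base rates for a cutoff with sensitivity and specificity in the ranges reported; the exact operating characteristics used are illustrative.

```python
# Sketch: positive and negative predictive power as a function of the
# symptom-exaggeration base rate, given a cutoff's sensitivity/specificity.
def predictive_power(sensitivity, specificity, base_rate):
    """Bayes' rule for PPV and NPV at a given prevalence."""
    ppv = (sensitivity * base_rate) / (
        sensitivity * base_rate + (1 - specificity) * (1 - base_rate))
    npv = (specificity * (1 - base_rate)) / (
        specificity * (1 - base_rate) + (1 - sensitivity) * base_rate)
    return ppv, npv

# Illustrative operating characteristics for a screening cutoff like mBIAS >= 11.
sens, spec = 0.45, 0.92
for base_rate in (0.15, 0.35, 0.50):
    ppv, npv = predictive_power(sens, spec, base_rate)
    print(f"base rate {base_rate:.0%}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
```
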
Source: http://dx.doi.org/10.1080/13803395.2016.1161732

Evaluating the Medical Symptom Validity Test (MSVT) in a Sample of Veterans Between the Ages of 18 to 64.

Appl Neuropsychol Adult 2017 Mar-Apr;24(2):132-139. Epub 2016 Apr 4.

John D. Dingell Department of Veterans Affairs Medical Center, Detroit, Michigan, USA.

The purpose of the current study was to compare three potential profiles of the Medical Symptom Validity Test (MSVT; Pass, Genuine Memory Impairment Profile [GMIP], and Fail) on other freestanding and embedded performance validity tests (PVTs). Notably, a quantitatively computed version of the GMIP was utilized in this investigation. Data obtained from veterans referred for a neuropsychological evaluation at a metropolitan Veterans Affairs medical center were included (N = 494). Individuals aged 65 and older were excluded to reduce the likelihood of including veterans with dementia. Of the sample, 222 (45%) were in the Pass group. Of the 272 who failed the easy subtests of the MSVT, 221 (81%) met quantitative criteria for the GMIP and 51 (19%) were classified as Fail. The Pass group failed fewer freestanding and embedded PVTs and obtained higher raw scores on all PVTs than both the GMIP and Fail groups. The differences in performance between the GMIP and Fail groups were minimal: GMIP protocols failed fewer freestanding PVTs than the Fail group, whereas failure on embedded PVTs did not differ between the two. The MSVT GMIP incorporates the presence of clinical correlates of disability to assist with this distinction, but future research should consider performances on other freestanding measures of performance validity to differentiate cognitive impairment from invalidity.
Source: http://dx.doi.org/10.1080/23279095.2015.1107565

Embedded Measures of Performance Validity in the Rey Complex Figure Test in a Clinical Sample of Veterans.

Appl Neuropsychol Adult 2016 18;23(2):105-14. Epub 2015 Sep 18.

Department of Psychology, Henry Ford Health System, Detroit, Michigan.

The purpose of this study was to determine how well scores from the Rey Complex Figure Test (RCFT) could serve as embedded measures of performance validity in a large, heterogeneous clinical sample at an urban-based Veterans' Affairs hospital. Participants were divided into credible performance (n = 244) and noncredible performance (n = 87) groups based on common performance validity tests during their respective clinical evaluations. We evaluated how well preselected RCFT scores could discriminate between the 2 groups using cut scores from single indexes as well as multivariate logistic regression prediction models. Additionally, we evaluated how well memory error patterns (MEPs) could discriminate between the 2 groups. Optimal discrimination occurred when indexes from the Copy and Recognition trials were simultaneous predictors in logistic regression models, with 91% specificity and at least 53% sensitivity. Logistic regression yielded superior discrimination compared with individual indexes and compared with the use of MEPs. Specific scores on the RCFT, including the Copy and Recognition trials, can serve as adequate indexes of performance validity, when using both cut scores and logistic regression prediction models. We provide logistic regression equations that can be applied in similar clinical settings to assist in determining performance validity.
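
A sketch of the general approach described above: fit a logistic regression on two embedded indexes and choose the decision threshold that holds specificity near .90 before reading off sensitivity. The simulated Copy and Recognition scores and group means are placeholders, not the study's RCFT data or published equations.

```python
# Sketch: two-predictor logistic regression for performance validity, with the
# probability threshold chosen to keep specificity near .90 in credible cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_cred, n_noncred = 244, 87
# Hypothetical Copy and Recognition scores (credible group scores higher).
X = np.vstack([rng.normal([32, 21], [3, 2], (n_cred, 2)),
               rng.normal([27, 16], [5, 4], (n_noncred, 2))])
y = np.r_[np.zeros(n_cred), np.ones(n_noncred)]      # 1 = noncredible

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

# Pick the lowest probability threshold whose specificity is still >= .90,
# which maximizes sensitivity subject to that specificity floor.
for t in np.sort(np.unique(p)):
    pred = p >= t
    specificity = np.mean(~pred[y == 0])
    if specificity >= 0.90:
        sensitivity = np.mean(pred[y == 1])
        print(f"threshold={t:.2f}  specificity={specificity:.2f}  "
              f"sensitivity={sensitivity:.2f}")
        break
```
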
Source: http://dx.doi.org/10.1080/23279095.2015.1014557

The Prague Stroop Test: Normative standards in older Czech adults and discriminative validity for mild cognitive impairment in Parkinson's disease.

J Clin Exp Neuropsychol 2015 ;37(8):794-807

Department of Neurology and Centre of Clinical Neuroscience, First Faculty of Medicine and General University Hospital in Prague, Charles University in Prague, Prague, Czech Republic.

Objective: The aim of this study was to provide normative data for older and very old Czech adults on the Prague Stroop Test (PST) and to test its discriminative validity in individuals with Parkinson's disease mild cognitive impairment (PD-MCI).

Method: The construction of the PST was modeled after the Victoria Stroop Test. We examined 539 participants aged 60-96 years who met strict inclusion criteria. We then compared the PST scores of a group of 45 PD-MCI patients with those of a healthy adult sample (HAS) of 45 age- and education-matched individuals.

Results: I. In the non-clinical sample, robust age- and education-related influences were observed on all PST scores; no gender effect was noted. II. For clinical cases, the interference condition (PST-C) discriminated between PD-MCI and HAS (all scores ps < .01), with an area under the curve (AUC) of 77%. A screening cut-off of ≤ 27 s showed sensitivity of 82% and specificity of 53%, whereas a more conservative diagnostic cut-off of ≤ 33 s showed sensitivity of 60% and specificity of 80%.

Discussion: The present study provides PST normative data for basic, interference, and error scores stratified by age (60-96 years). The PST appears to be a helpful tool for the diagnosis of PD-MCI, especially in research settings at Level II (Litvan et al., 2012), and for subtyping of PD-MCI attention/working memory and executive function deficits.
Source: http://dx.doi.org/10.1080/13803395.2015.1057106

Is Co-norming Required?

Arch Clin Neuropsychol 2015 Nov 6;30(7):611-33. Epub 2015 Jul 6.

Department of Psychology, Brigham Young University in Hawaii, Laie, HI, USA.

Researchers responsible for developing test batteries have argued that competent practice requires the use of a "fixed battery" that is co-normed. We tested this assumption with three normative systems: co-normed norms, meta-regressed norms, and a hybrid of the two. We analyzed two samples: 330 referred patients and 99 undergraduate volunteers. The T scores generated for referred patients using the three systems were highly associated with one another and quite similar in magnitude, with Overall Test Battery Means (OTBMs) of 43.8, 45.0, and 43.9 for the co-normed, hybrid, and meta-regressed scores, respectively. For volunteers, the OTBMs were 47.4, 47.5, and 47.1, respectively. The correlations among these OTBMs across systems were all above .90. Differences among OTBMs across normative systems were small and not clinically meaningful. We conclude that co-norming is not necessary for competent clinical practice.
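
The Overall Test Battery Mean used above is simply the mean T score across the battery for each patient; the sketch below computes OTBMs under two hypothetical normative systems and their cross-system correlation, with simulated scores standing in for the actual norm sets.

```python
# Sketch: Overall Test Battery Mean (OTBM) = mean T score across all tests
# for a patient, compared across two normative systems. Simulated data only.
import numpy as np

rng = np.random.default_rng(3)
n_patients, n_tests = 330, 15

# Hypothetical T scores from a co-normed system, and a meta-regressed system
# that differs by small test-specific offsets plus patient-level noise.
t_conorm = rng.normal(44, 9, (n_patients, n_tests))
t_metareg = t_conorm + rng.normal(0, 2, n_tests) + rng.normal(0, 3, (n_patients, n_tests))

otbm_conorm = t_conorm.mean(axis=1)
otbm_metareg = t_metareg.mean(axis=1)

r = np.corrcoef(otbm_conorm, otbm_metareg)[0, 1]
print(f"Mean OTBM (co-normed):      {otbm_conorm.mean():.1f}")
print(f"Mean OTBM (meta-regressed): {otbm_metareg.mean():.1f}")
print(f"Correlation between OTBMs:  {r:.2f}")
```
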
Source: http://dx.doi.org/10.1093/arclin/acv039

Wechsler Adult Intelligence Scale-IV Dyads for Estimating Global Intelligence.

Assessment 2015 Aug 29;22(4):441-8. Epub 2014 Sep 29.

University of Aberdeen, Aberdeen, UK.

All possible two-subtest combinations of the core Wechsler Adult Intelligence Scale-IV (WAIS-IV) subtests were evaluated as possible viable short forms for estimating full-scale IQ (FSIQ). Validity of the dyads was evaluated relative to FSIQ in a large clinical sample (N = 482) referred for neuropsychological assessment. Sample validity measures included correlations, mean discrepancies, and levels of agreement between dyad estimates and FSIQ scores. In addition, reliability and validity coefficients were derived from WAIS-IV standardization data. The Coding + Information dyad had the strongest combination of reliability and validity data. However, several other dyads yielded comparable psychometric performance, albeit with some variability in their particular strengths. We also observed heterogeneity between validity coefficients from the clinical and standardization-based estimates for several dyads. Thus, readers are encouraged to also consider the individual psychometric attributes, their clinical or research goals, and client or sample characteristics when selecting among the dyadic short forms.
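
One way a two-subtest dyad is turned into an FSIQ estimate is by regressing FSIQ on the dyad's scaled scores and then checking the correlation and mean discrepancy with the full score, as in the validity analyses above. The sketch assumes that regression approach and uses simulated Coding and Information scores, not the WAIS-IV standardization or clinical data.

```python
# Sketch: estimate FSIQ from a two-subtest dyad (e.g., Coding + Information)
# by regression, then check correlation and mean discrepancy with full FSIQ.
import numpy as np

rng = np.random.default_rng(4)
n = 482
g = rng.normal(0, 1, n)                                   # latent general ability
coding = np.clip(10 + 3 * (0.7 * g + 0.7 * rng.normal(0, 1, n)), 1, 19)
information = np.clip(10 + 3 * (0.8 * g + 0.6 * rng.normal(0, 1, n)), 1, 19)
fsiq = 100 + 15 * (0.95 * g + 0.3 * rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), coding, information])
beta, *_ = np.linalg.lstsq(X, fsiq, rcond=None)           # regression weights
fsiq_hat = X @ beta

r = np.corrcoef(fsiq, fsiq_hat)[0, 1]
within_5 = np.mean(np.abs(fsiq - fsiq_hat) <= 5)
print(f"r(dyad estimate, FSIQ) = {r:.2f}")
print(f"Mean discrepancy = {np.mean(fsiq_hat - fsiq):.2f} points")
print(f"Agreement within +/-5 points: {within_5:.0%}")
```
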
Source: http://dx.doi.org/10.1177/1073191114551551

Utility of the Montreal Cognitive Assessment and Mini-Mental State Examination in predicting general intellectual abilities.

Cogn Behav Neurol 2014 Sep;27(3):148-54

John D. Dingell Department of Veterans Affairs Medical Center, Detroit, Michigan; Wayne State University, Department of Psychology, Detroit, Michigan.

Objective: To determine whether scores from 2 commonly used cognitive screening tests can help predict general intellectual functioning in older adults.

Background: Cutoff scores for determining cognitive impairment have been validated for both the Montreal Cognitive Assessment (MoCA) and the Mini-Mental State Examination (MMSE). However, less is known about how the 2 measures relate to general intellectual functioning as measured by the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV).

Methods: A sample of 186 older adults referred for neuropsychological assessment completed the MoCA, MMSE, and WAIS-IV. Regression equations determined how accurately the screening measures could predict the WAIS-IV Full Scale Intelligence Quotient (FSIQ). We also determined how predictive the MoCA and MMSE were when combined with 2 premorbid estimates of FSIQ: the Test of Premorbid Functioning (TOPF) (a reading test of phonetically irregular words) and a predicted TOPF score based on demographic variables.

Results: MoCA and MMSE both correlated moderately with WAIS-IV FSIQ. Hierarchical regression models containing the MoCA or MMSE combined with TOPF scores accounted for 58% and 49%, respectively, of the variance in obtained FSIQ. Both regression equations accurately estimated FSIQ to within 10 points in >75% of the sample.

Conclusions: Both the MoCA and MMSE provide reasonable estimates of FSIQ. Prediction improves when these measures are combined with other estimates of FSIQ. We provide 4 equations designed to help clinicians interpret these screening measures.
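
A sketch of the hierarchical step described above: enter the premorbid estimate (TOPF) first, add the screening score, and examine the gain in explained variance and the proportion of estimates falling within 10 FSIQ points. The simulated scores and resulting values are illustrative, not the published equations.

```python
# Sketch: hierarchical regression predicting FSIQ from TOPF, then TOPF + MoCA,
# reporting R^2 at each step and accuracy within 10 points. Simulated data.
import numpy as np

def ols_r2_and_pred(X, y):
    """Fit OLS with an intercept; return R^2 and fitted values."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    yhat = X1 @ beta
    r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    return r2, yhat

rng = np.random.default_rng(5)
n = 186
g = rng.normal(0, 1, n)
topf = 100 + 12 * (0.7 * g + 0.7 * rng.normal(0, 1, n))   # premorbid estimate
moca = np.clip(26 + 3 * (0.6 * g + 0.8 * rng.normal(0, 1, n)), 0, 30)
fsiq = 100 + 15 * (0.9 * g + 0.44 * rng.normal(0, 1, n))

r2_step1, _ = ols_r2_and_pred(topf.reshape(-1, 1), fsiq)
r2_step2, fsiq_hat = ols_r2_and_pred(np.column_stack([topf, moca]), fsiq)

print(f"R^2, TOPF only:        {r2_step1:.2f}")
print(f"R^2, TOPF + MoCA:      {r2_step2:.2f}")
print(f"Within 10 FSIQ points: {np.mean(np.abs(fsiq - fsiq_hat) <= 10):.0%}")
```
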
Source: http://dx.doi.org/10.1097/WNN.0000000000000035

Embedded measures of performance validity using verbal fluency tests in a clinical sample.

Appl Neuropsychol Adult 2015 25;22(2):141-6. Epub 2014 Aug 25.

Department of Psychology, John D. Dingell Veterans Affairs Medical Hospital.

The objective of this study was to determine to what extent verbal fluency measures can be used as performance validity indicators during neuropsychological evaluation. Participants were clinically referred for neuropsychological evaluation at an urban Veterans Affairs hospital and were placed into two groups based on their objectively evaluated effort on performance validity tests (PVTs). Individuals who exhibited credible performance (n = 431) failed no PVTs, and those with poor effort (n = 192) failed two or more PVTs. All participants completed the Controlled Oral Word Association Test (COWAT) and Animals verbal fluency measures. We evaluated how well verbal fluency scores could discriminate between the two groups. Raw scores and T scores for Animals discriminated between the credible-performance and poor-effort groups with 90% specificity and greater than 40% sensitivity; COWAT scores had lower sensitivity for detecting poor effort. Combining COWAT (FAS) and Animals scores in logistic regression models yielded acceptable group classification, with 90% specificity and greater than 44% sensitivity. Verbal fluency measures can thus yield adequate detection of poor effort during neuropsychological evaluation. We provide cut points and logistic regression models for predicting the probability of poor effort in our clinical setting, with suggested cutoff scores to optimize sensitivity and specificity.
Source: http://dx.doi.org/10.1080/23279095.2013.873439

Comparisons of five performance validity indices in bona fide and simulated traumatic brain injury.

Clin Neuropsychol 2014 1;28(5):851-75. Epub 2014 Jul 1.

Department of Psychology, Wayne State University, Detroit, MI 48202, USA.

A number of performance validity tests (PVTs) are used to assess memory complaints associated with traumatic brain injury (TBI); however, few studies examine the concordance and predictive accuracy of multiple PVTs, specifically in the context of combined models in known-group designs. The present study compared five widely used PVTs: the Test of Memory Malingering (TOMM), Medical Symptom Validity Test (MSVT), Reliable Digit Span (RDS), Word Choice Test (WCT), and California Verbal Learning Test - Forced Choice (CVLT-FC). Participants were 51 adults with bona fide moderate to severe TBI and 58 demographically comparable healthy adults coached to simulate memory impairment. Classification accuracy of individual PVTs was evaluated using logistic regression and receiver operating characteristic (ROC) curves, examining both the dichotomous cutting scores as recommended by the test publishers and continuous scores for the measures. Results demonstrated nearly equivalent discrimination ability of the TOMM, MSVT, and CVLT-FC as individual predictors, all of which markedly outperformed the WCT and RDS. Models of combined PVTs were examined using Bayesian information criterion statistics, with results demonstrating that diagnostic accuracy showed only small to modest growth when the number of tests was increased beyond two. Considering the clinical and pragmatic issues in deriving a parsimonious assessment battery, these findings suggest that using the TOMM and CVLT in conjunction or the MSVT and CVLT in conjunction maximized predictive accuracy as compared to a single index or an assortment of these widely used measures.
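
The combined-model comparison can be outlined by fitting logistic regressions for each PVT combination and ranking them by BIC (lower is preferred). The sketch below assumes continuous scores for three of the measures and simulated group differences; it is not the study's data or final models.

```python
# Sketch: compare single-PVT and combined-PVT logistic models by BIC
# (lower BIC preferred), mirroring the combined-model comparison above.
# Scores are simulated; the three PVT names are placeholders.
from itertools import combinations

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_tbi, n_sim = 51, 58
y = np.r_[np.zeros(n_tbi), np.ones(n_sim)]             # 1 = coached simulator

names = ["TOMM", "MSVT", "CVLT_FC"]
scores = {name: np.r_[rng.normal(46, 4, n_tbi),         # bona fide TBI group
                      rng.normal(38, 8, n_sim)]         # simulators score lower
          for name in names}

def bic_for(predictors):
    """Fit a logistic regression on the named PVT scores and return its BIC."""
    X = sm.add_constant(np.column_stack([scores[p] for p in predictors]))
    return sm.Logit(y, X).fit(disp=0).bic

models = [(p,) for p in names] + list(combinations(names, 2)) + [tuple(names)]
for m, bic in sorted(((m, bic_for(m)) for m in models), key=lambda t: t[1]):
    print(f"{'+'.join(m):<18} BIC = {bic:.1f}")
```
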
Source: http://dx.doi.org/10.1080/13854046.2014.927927

Finger Tapping Test performance as a measure of performance validity.

Clin Neuropsychol 2014 16;28(5):876-88. Epub 2014 Apr 16.

Psychology Section, John D. Dingell DVAMC, Detroit, MI, USA.

The Finger Tapping Test (FTT), administered in most standard neuropsychological evaluations, has been presented as an embedded measure of performance validity. The present study evaluated the utility of three different scoring systems intended to detect invalid performance based on the FTT. The scoring systems were evaluated in neuropsychology cases from clinical and independent practices, in which performance was classified as credible (passing all performance validity measures) or noncredible (failing two or more validity indices). Each FTT scoring method showed specificity of approximately 90% and sensitivity of slightly more than 40%. When suboptimal performance was defined as failure of any of the three scoring methods, specificity was unchanged and sensitivity improved to 50%. The results are discussed in terms of the utility of combining multiple scoring measures for the same test, as well as the benefits of embedded measures administered over the duration of the evaluation.
Source: http://dx.doi.org/10.1080/13854046.2014.907583

Czech version of Rey Auditory Verbal Learning test: normative data.

Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2014 17;21(6):693-721. Epub 2013 Dec 17.

Department of Neurology and Centre of Clinical Neuroscience, Charles University in Prague, Prague, Czech Republic.

The present study provides normative data, stratified by age, for the Czech version of the Rey Auditory Verbal Learning Test (RAVLT), derived from a sample of 306 cognitively normal subjects (20-85 years). Participants met strict inclusion criteria (absence of any active or past neurological or psychiatric disorder) and performed within normal limits on other neuropsychological measures. Our analyses revealed significant relationships between most RAVLT indices and both age and education. Normative data are provided not only for basic RAVLT scores but, for the first time, also for a variety of derived scores (gained/lost access, primacy/recency effect) and error scores. The study confirmed the logarithmic character of the learning slope, consistent with other studies. These norms enable the clinician to evaluate a subject's RAVLT memory performance more precisely across a large number of indices and can be viewed as a concrete example of the Quantified Process Approach to neuropsychological assessment.
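
The 'logarithmic character of the learning slope' can be checked by fitting recall across the five learning trials to a log curve. The sketch fits recall = a + b*ln(trial) to illustrative trial means, not the Czech normative values.

```python
# Sketch: fit a logarithmic learning curve, recall = a + b * ln(trial),
# to mean words recalled across the five RAVLT learning trials.
import numpy as np

trials = np.arange(1, 6)
mean_recall = np.array([6.5, 9.0, 10.8, 11.9, 12.6])   # illustrative trial means

X = np.column_stack([np.ones(5), np.log(trials)])
(a, b), *_ = np.linalg.lstsq(X, mean_recall, rcond=None)
fitted = a + b * np.log(trials)
r2 = 1 - np.sum((mean_recall - fitted) ** 2) / np.sum((mean_recall - mean_recall.mean()) ** 2)

print(f"recall ~ {a:.2f} + {b:.2f} * ln(trial),  R^2 = {r2:.3f}")
```
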
Source: http://dx.doi.org/10.1080/13825585.2013.865699

WAIS-IV reliable digit span is no more accurate than age corrected scaled score as an indicator of invalid performance in a veteran sample undergoing evaluation for mTBI.

Clin Neuropsychol 2013 7;27(8):1362-72. Epub 2013 Oct 7.

Department of Mental Health Services, VA Ann Arbor Healthcare System, MI, USA.

Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS that includes the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤ 7), revised RDS (≤ 11), and Digit Span age-corrected scaled score (≤ 6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of the three indices, the revised RDS and the Digit Span age-corrected scaled score provided the most accurate measures of performance validity.
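
Reliable Digit Span is conventionally scored by summing, across conditions, the longest span length at which both trials of that length are repeated correctly; the revised version adds the WAIS-IV Sequencing condition. The sketch below assumes that scoring convention and uses hypothetical trial-level data.

```python
# Sketch: traditional and revised Reliable Digit Span (RDS), assuming the
# conventional rule: sum the longest span length at which BOTH trials of that
# length were passed, across conditions. Trial data below are hypothetical.
def reliable_span(trials_by_length):
    """trials_by_length: {span_length: (trial1_correct, trial2_correct)}."""
    passed = [length for length, (t1, t2) in trials_by_length.items() if t1 and t2]
    return max(passed, default=0)

forward = {3: (1, 1), 4: (1, 1), 5: (1, 1), 6: (1, 0), 7: (0, 0)}
backward = {2: (1, 1), 3: (1, 1), 4: (0, 1), 5: (0, 0)}
sequencing = {2: (1, 1), 3: (1, 1), 4: (1, 1), 5: (0, 0)}   # WAIS-IV only

rds_traditional = reliable_span(forward) + reliable_span(backward)
rds_revised = rds_traditional + reliable_span(sequencing)

print("Traditional RDS:", rds_traditional)   # 5 + 3 = 8 (passes the <= 7 cutoff)
print("Revised RDS:    ", rds_revised)       # 8 + 4 = 12 (passes the <= 11 cutoff)
```
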
Source: http://dx.doi.org/10.1080/13854046.2013.845248

Assessing effort: differentiating performance and symptom validity.

Clin Neuropsychol 2013 12;27(8):1234-46. Epub 2013 Sep 12.

Neuropsychology Department, Rehabilitation Institute of Michigan, Detroit, MI, USA.

The current study aimed to clarify the relationships among the constructs involved in neuropsychological assessment, including cognitive performance, symptom self-report, performance validity, and symptom validity. Participants consisted of 120 consecutively evaluated individuals from a veterans' hospital with mixed referral sources. Measures included the Wechsler Adult Intelligence Scale-Fourth Edition Full Scale IQ (WAIS-IV FSIQ), California Verbal Learning Test-Second Edition (CVLT-II), Trail Making Test Part B (TMT-B), Test of Memory Malingering (TOMM), Medical Symptom Validity Test (MSVT), WAIS-IV Reliable Digit Span (RDS), Posttraumatic Stress Disorder Checklist-Military Version (PCL-M), MMPI-2 F scale, MMPI-2 Symptom Validity Scale (FBS), MMPI-2 Response Bias Scale (RBS), and the Postconcussive Symptom Questionnaire (PCSQ). Six different models were tested using confirmatory factor analysis (CFA) to determine the factor model that best described the relationships between cognitive performance, symptom self-report, performance validity, and symptom validity. The strongest and most parsimonious model was a three-factor model in which cognitive performance, performance validity, and self-reported symptoms (including both standard and symptom validity measures) were separate factors. The findings suggest that failure in one validity domain does not necessarily invalidate the other domain. Thus, performance validity and symptom validity should be evaluated separately.
Source: http://dx.doi.org/10.1080/13854046.2013.835447

Number of impaired scores as a performance validity indicator.

J Clin Exp Neuropsychol 2013 20;35(4):413-20. Epub 2013 Mar 20.

Division of Physical Medicine and Rehabilitation, University of Utah School of Medicine, Salt Lake City, UT 84132, USA.

This study examined embedded performance validity indicators (PVI) based on the number of impaired scores in an evaluation and the overall test battery mean (OTBM). Adult participants (N = 175) reporting traumatic brain injury were grouped using eight PVI. Participants who passed all PVI (n = 67) demonstrated fewer impaired scores and higher OTBM than those who failed two or more PVI (n = 66). Impairment was defined at three levels: T scores < 40, 35, and 30. With specificity ≥.90, sensitivity ranged from .51 to .71 for number of impaired scores and .74 for OTBM.
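
A sketch of the indicator itself: count the scores falling below each impairment threshold and compute the overall test battery mean for each protocol. The simulated T scores and battery length are placeholders for the actual evaluations.

```python
# Sketch: number-of-impaired-scores validity indicator and the overall test
# battery mean (OTBM), using simulated T scores (one protocol per row).
import numpy as np

rng = np.random.default_rng(7)
t_scores = rng.normal(45, 10, (175, 20))     # hypothetical battery of 20 tests

otbm = t_scores.mean(axis=1)
for cutoff in (40, 35, 30):
    n_impaired = (t_scores < cutoff).sum(axis=1)
    print(f"T < {cutoff}: median impaired scores per protocol = "
          f"{np.median(n_impaired):.0f}")
print(f"Median OTBM = {np.median(otbm):.1f}")
```
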
Source: http://dx.doi.org/10.1080/13803395.2013.781134

Do administration instructions alter optimal neuropsychological test performance? Data from healthy volunteers.

Appl Neuropsychol Adult 2013 20;20(1):15-9. Epub 2012 Sep 20.

Department of Psychology, Texas State University, San Marcos, Texas 78666, USA.

The degree to which patients should be prompted to give their best effort has not been adequately addressed in the literature, nor has the extent to which they should be informed that measures of effort will be included in the assessment battery. Three groups of undergraduates were given three different instructional sets prior to completing a neuropsychological evaluation. The instructions provided different levels of motivation to perform optimally, as well as varying degrees of warning that poor effort could be detected. The three groups did not differ in performance on any of the cognitive measures, although outlier performance resulted in lower mean performance on the Finger Tapping Test in the most clearly warned group. The results are discussed in terms of the potential of different instructional sets to affect motivation for optimal test performance.
Source: http://dx.doi.org/10.1080/09084282.2012.670152

Derivation of an embedded Rey Auditory Verbal Learning Test performance validity indicator.

Clin Neuropsychol 2012 18;26(8):1397-408. Epub 2012 Oct 18.

University of Utah School of Medicine, Salt Lake City, UT, USA.

This study derived an embedded performance validity indicator for the Rey Auditory Verbal Learning Test (AVLT) using an archival dataset. Participants aged 20 to 65 (N = 167) who reported traumatic brain injury and completed at least two performance validity tests were included. The group who passed all performance validity measures (n = 68) demonstrated higher scores on all AVLT trials than the group who failed two or more validity indicators (n = 62). Bayesian model averaging was used to identify the optimal combination of AVLT variables for group discrimination; Total Learning and Recognition raw scores were selected. Logistic regression using these variables showed excellent discrimination with an area under the curve of .85. The resulting AVLT performance validity index demonstrated sensitivity of .55 with specificity of .91. Further study of this index is warranted and cross-validation is recommended prior to clinical use.
Source: http://dx.doi.org/10.1080/13854046.2012.728627

Czech version of the Trail Making Test: normative data and clinical utility.

Arch Clin Neuropsychol 2012 Dec 30;27(8):906-14. Epub 2012 Sep 30.

Department of Neurology and Centre of Clinical Neuroscience, 1st Faculty of Medicine and General University Hospital in Prague, Charles University in Prague, Prague, Czech Republic.

The Trail Making Test (TMT) comprises two psychomotor tasks that measure a wide range of visual-perceptual and executive functions. The purpose of this study was to provide Czech normative data and to examine the relationship between derived TMT indices and demographic variables. The TMT was administered to 421 healthy adults. Two clinical groups (n = 126) were evaluated to investigate the clinical utility of the TMT-derived scores: amnestic mild cognitive impairment (n = 90) and Alzheimer's disease (n = 36). Statistical analyses showed that age and education, but not gender, were significantly associated with TMT completion times and derived scores. Of all the indices, only the TMT ratio score was insensitive to age. We present normative values for the Czech version of the TMT, providing a reference for measuring individual performance in native Czech speakers. Moreover, we found that the accuracy of the TMT improved when the effect of age was attenuated.
Source: http://dx.doi.org/10.1093/arclin/acs084

Performance validity and neuropsychological outcomes in litigants and disability claimants.

Clin Neuropsychol 2012 25;26(5):850-65. Epub 2012 May 25.

Rehabilitation Institute of Michigan, Detroit, USA.

This study examined the relationship of performance validity and neuropsychological outcomes in a sample of individuals referred for independent neuropsychological examination in the context of reported traumatic brain injury (82% mild). Archival data were examined on 175 participants aged 20 to 65 who were administered at least two performance validity measures. Participants who passed all effort measures (Pass; n = 61) outperformed those who failed two or more (Fail; n = 70) on the majority of tests in the neuropsychological battery. The Fail group showed a higher percentage of impaired test scores than the Pass group with impairment defined at three levels (T scores < 40, 35, and 30). At the most conservative impairment cutoff (T < 30), 16% of the Pass group demonstrated impaired scores on more than three measures, while 79% of the Fail group showed impaired scores on more than three measures. The number of effort measures failed correlated highly with the overall test battery mean (r = -.73). On cognitive domain summary scores, effect sizes based on levels of effort (d = 1.12 to 1.86) were higher than those based on injury severity (d = 0.03 to 0.36).
Source: http://dx.doi.org/10.1080/13854046.2012.686631

Associations between markers of colorectal cancer stem cells and adenomas among ethnic groups.

Dig Dis Sci 2012 Sep 6;57(9):2334-9. Epub 2012 May 6.

John D. Dingell Veterans Affairs Medical Center, Karmanos Cancer Institute, Wayne State University School of Medicine, 4646 John R; Room: B-4238, Detroit, MI 48201, USA.

Background And Purposes: Most colorectal tumors develop from adenomatous polyps, which are detected by colonoscopy. African Americans (AAs) have higher incidence of colorectal cancer (CRC) and greater mortality from this disease than Caucasian Americans (CAs). We investigated whether differences in predisposition to CRC and its surrogate (colonic adenomas) between these ethnic groups were related to numbers of cancer stem or stem-like cells (CSCs) in colonocytes.

Methods: We analyzed colonic effluent from 11 AA and 14 CA patients who underwent scheduled colonoscopy examinations at the John D. Dingell Veterans Affairs Medical Center. We determined proportions of cells that expressed the CSC markers CD44 and CD166 by flow cytometry.

Results: The proportion of colonocytes that were CD44(+)CD166(-) in effluent from patients with adenomas was significantly greater than that from patients without adenomas (P = 0.01); the proportion of CD44(+)CD166(+) colonocytes was also greater (P = 0.07). Effluent from AAs with adenomas had 60% more CD44(+)CD166(-) colonocytes than that from CAs with adenomas. Using cutoff values of 8% for AAs and 3% for CAs, determined by receiver operating characteristic curve analysis, the proportion of CD44(+)CD166(-) colonocytes had a positive predictive value of 100% for the detection of adenomas in both AAs and CAs.

Conclusion: The proportion of CD44(+)CD166(-) colonocytes in colonic effluent can be used to identify patients with adenomas. AAs with adenomas have a higher proportion of CD44(+)CD166(-) colonocytes than CAs. The increased proportion of CSCs in colonic tissue from AAs might be associated with the increased incidence of CRC in this population.
Source: http://dx.doi.org/10.1007/s10620-012-2195-3
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3816978

Substitution of California Verbal Learning Test, second edition for Verbal Paired Associates on the Wechsler Memory Scale, fourth edition.

Clin Neuropsychol 2012 30;26(4):599-608. Epub 2012 Mar 30.

Department of Psychology, Wayne State University, Detroit, MI, USA.

Two common measures used to evaluate verbal learning and memory are the Verbal Paired Associates (VPA) subtest from the Wechsler Memory Scales (WMS) and the second edition of the California Verbal Learning Test (CVLT-II). For the fourth edition of the WMS, scores from the CVLT-II can be substituted for VPA; the present study sought to examine the validity of this substitution. Paired-samples t tests were conducted between original VPA scaled scores and scaled scores obtained from the CVLT-II substitution to evaluate comparability, with similar comparisons made at the index score level. At the index score level, substitution resulted in significantly lower scores for the Auditory Memory Index (AMI; p = .03, r = .13) but not for the Immediate Memory Index (IMI; p = .29) or the Delayed Memory Index (DMI; p = .09). For the subtest scores, substituted scaled scores for VPA were not significantly different from original scores for the immediate recall condition (p = .20) but were significantly lower at delayed recall (p = .01). These findings offer partial support for the substitution, although for both the immediate and delayed conditions the substitution produced generally lower subtest scores compared with the original VPA subtest scores.
Source: http://dx.doi.org/10.1080/13854046.2012.677478

Parsimonious prediction of Wechsler Memory Scale, Fourth Edition scores: immediate and delayed memory indexes.

J Clin Exp Neuropsychol 2012 2;34(5):531-42. Epub 2012 Mar 2.

Department of Psychology, Wayne State University, Detroit, MI, USA.

Research on previous versions of the Wechsler Memory Scale (WMS) found that index scores could be predicted using a parsimonious selection of subtests (e.g., Axelrod & Woodard, 2000). The release of the Fourth Edition (WMS-IV) requires a reassessment of these predictive formulas as well as the use of indices from the California Verbal Learning Test-II (CVLT-II). Complete WMS-IV and CVLT-II data were obtained from 295 individuals. Six regression models were fit using WMS-IV subtest scaled scores (Logical Memory [LM], Visual Reproduction [VR], and Verbal Paired Associates [VPA]) and CVLT-II substituted scores to predict Immediate Memory Index (IMI) and Delayed Memory Index (DMI) scores. All three predictions of IMI significantly correlated with the complete IMI (r = .92 to .97). Likewise, predicted DMI scores significantly correlated with complete DMI (r = .92 to .97). Statistical preference was indicated for the models using LM, VR, and VPA, in which 97% and 96% of the cases fell within two standard errors of measurement (SEMs) of the full index scores, respectively. The present findings demonstrate that the IMI and DMI can be reliably estimated using two or three subtests from the WMS-IV, with preference for using three. In addition, evidence suggests little to no improvement in predictive accuracy with the inclusion of CVLT-II indices.
Source: http://dx.doi.org/10.1080/13803395.2012.665437

Parsimonious estimation of the Wechsler Memory Scale, Fourth Edition demographically adjusted index scores: immediate and delayed memory.

Clin Neuropsychol 2012 1;26(3):490-500. Epub 2012 Mar 1.

Department of Psychology, Wayne State University, Detroit, MI, USA.

The recent release of the Wechsler Memory Scale-Fourth Edition (WMS-IV) contains many improvements from a theoretical and administration perspective, including demographic corrections using the Advanced Clinical Solutions. Although the administration time has been reduced from previous versions, a shortened version may be desirable in certain situations given practical time limitations in clinical practice. The current study evaluated two- and three-subtest estimations of demographically corrected Immediate and Delayed Memory index scores using both simple arithmetic prorating and regression models. All estimated values were significantly associated with observed index scores. Use of Lin's Concordance Correlation Coefficient as a measure of agreement showed a high degree of precision and virtually zero bias in the models, although the regression models showed a stronger association than the prorated models. Regression-based models proved to be more accurate than prorated estimates, with less dispersion around observed values, particularly when three-subtest regression models were used. Overall, the present research supports estimating demographically corrected WMS-IV index scores in clinical practice, with adequate performance from arithmetically prorated models and stronger performance from regression models.
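
Lin's Concordance Correlation Coefficient combines precision (the Pearson correlation) with penalties for mean and scale differences, which is why it captures both association and bias in the agreement analyses above. The sketch applies its standard formula to simulated estimated and observed index scores.

```python
# Sketch: Lin's concordance correlation coefficient (CCC) between estimated
# and observed index scores. CCC = 2*cov(x, y) / (var(x) + var(y) + (mx - my)^2).
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(8)
observed = rng.normal(100, 15, 295)                  # observed index scores
estimated = observed + rng.normal(0, 5, 295)         # short-form estimates
print(f"Lin's CCC = {lins_ccc(observed, estimated):.3f}")
```
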
Source: http://dx.doi.org/10.1080/13854046.2012.665084

Determining an appropriate cutting score for indication of impairment on the Montreal Cognitive Assessment.

Int J Geriatr Psychiatry 2012 Nov 9;27(11):1189-94. Epub 2012 Jan 9.

John D. Dingell Department of Veterans Affairs Medical Center, Detroit, MI, USA.

Objective/methods: The Montreal Cognitive Assessment (MoCA) is a brief yet comprehensive cognitive instrument used to assess level of impairment in neurological populations. The purpose of the present study was to assess the ability of the MoCA to detect cognitive impairment in a veteran patient population referred for neuropsychological testing and to determine optimal cutoff scores on the MoCA when compared with widely used neuropsychological measures.

Results: Receiver operating characteristic (ROC) analyses indicated that the optimal cutoff score to detect impairment in the present sample (≤ 20) was notably lower than that suggested by others.

Conclusions: Use of the previously suggested cut score of <26 may overpathologize neurologically intact individuals. Further research utilizing ROC curve analysis should be conducted to establish appropriate cutoff scores for populations that may differ from the present sample.
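
Cut-score selection of the kind recommended above is commonly done by sweeping candidate cutoffs along the ROC curve and taking the point that best balances sensitivity and specificity (e.g., Youden's J). The sketch below assumes that approach with simulated MoCA totals and an external impairment criterion; it does not reproduce the study's analysis.

```python
# Sketch: choose a screening cutoff by maximizing Youden's J = sens + spec - 1
# over candidate MoCA totals. Scores are simulated, not the study data.
import numpy as np

rng = np.random.default_rng(9)
impaired = np.clip(rng.normal(18, 4, 120).round(), 0, 30)   # impaired per criterion tests
intact = np.clip(rng.normal(24, 3, 120).round(), 0, 30)     # intact per criterion tests

best = (None, -1.0, 0.0, 0.0)
for cutoff in range(10, 29):              # call "impaired" if total <= cutoff
    sens = np.mean(impaired <= cutoff)
    spec = np.mean(intact > cutoff)
    j = sens + spec - 1
    if j > best[1]:
        best = (cutoff, j, sens, spec)

cutoff, j, sens, spec = best
print(f"Optimal cutoff: <= {cutoff}  (J = {j:.2f}, sens = {sens:.2f}, spec = {spec:.2f})")
```
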
Source: http://dx.doi.org/10.1002/gps.3768

Cross-validation of picture completion effort indices in personal injury litigants and disability claimants.

Arch Clin Neuropsychol 2011 Dec 10;26(8):768-73. Epub 2011 Oct 10.

Department of Rehabilitation Psychology and Neuropsychology, Rehabilitation Institute of Michigan, Detroit, 48201, USA.

Picture Completion (PC) indices from the Wechsler Adult Intelligence Scale, Third Edition, were investigated as performance validity indicators (PVIs) in a sample referred for independent neuropsychological examination. Participants from an archival database were included in the study if they were between the ages of 18 and 65 and were administered at least two PVIs. Performance on these measures yielded a group that passed all or failed only one measure (Pass; n = 95) and a group that failed two or more PVIs (Fail-2; n = 61). The Pass group performed better on PC than the Fail-2 group. PC cut scores were compared in their ability to differentiate the Pass and Fail-2 groups. A PC raw score of ≤12 showed the best classification accuracy in this sample, correctly classifying 91% of Pass and 41% of Fail-2 cases. Overall, PC indices show good specificity and low sensitivity for exclusive use as PVIs, demonstrating promise for use as adjunctive embedded measures.
Source: http://dx.doi.org/10.1093/arclin/acr079

Concurrent validity of three forced-choice measures of symptom validity.

Appl Neuropsychol 2011 Jan;18(1):27-33

Psychology Section, John D. Dingell VA Medical Center, Detroit, Michigan 48201, USA.

Forced-choice measures of recognition memory are used to assess the validity of an evaluation by using cutoff scores that discriminate individuals demonstrating good effort from those who are intentionally performing suboptimally. The current study evaluated three measures of motivation in a clinical sample of over 150 individuals. The Forced-Choice subtest from the California Verbal Learning Test and the Test of Memory Malingering generated comparable percentages of poor effort at 23% and 21%, respectively, yet they did not have complete concordance. Overall detection of poor performance using the 85% cut score on the three easy subtests from the Medical Symptom Validity Test (MSVT; Green, 2004) fell at 37%. When the MSVT cut score was lowered to 70%, the failure rate dropped to 21%, consistent with the other two measures and embedded measures of effort. The data are discussed in terms of adjusting the MSVT cut score and examining comparability in detection rates across these measures of symptom validity.
Source: http://dx.doi.org/10.1080/09084282.2010.523369