Publications by authors named "Robert J Kanser"

4 Publications


Performance validity assessment using response time on the Warrington Recognition Memory Test.

Clin Neuropsychol 2020 Feb 18:1-20. Epub 2020 Feb 18.

Department of Psychology, Wayne State University, Detroit, MI, USA.

Objective: The present study tested the incremental utility of response time (RT) on the Warrington Recognition Memory Test - Words (RMT-W) in classifying bona fide versus feigned traumatic brain injury (TBI).

Method: Participants were 173 adults: 55 with moderate to severe TBI, 69 healthy comparisons (HC) instructed to perform their best, and 49 healthy adults coached to simulate TBI (SIM). Participants completed a computerized version of the RMT-W in the context of a comprehensive neuropsychological battery. Groups were compared on RT indices, including mean RT (overall, correct trials, incorrect trials) and RT variability, as well as the traditional RMT-W accuracy score.

Results: Several RT indices differed significantly across groups, although RMT-W accuracy predicted group membership more strongly than any individual RT index. SIM showed longer average RT than both TBI and HC. RT variability and RT for incorrect trials distinguished SIM from HC but not SIM from TBI. In general, results for SIM-TBI comparisons were weaker than for SIM-HC comparisons. For SIM-HC comparisons, classification accuracy was excellent for all multivariable models combining RMT-W accuracy with one of the RT indices. For SIM-TBI comparisons, multivariable models showed acceptable to excellent discriminability. In addition to mean RT and RT on correct trials, the ratio of RT on correct items to RT on incorrect items added incremental predictive value to accuracy.

Conclusion: Findings add to the growing body of research supporting the value of combining RT with performance validity tests (PVTs) in discriminating between verified and feigned TBI. The diagnostic accuracy of the RMT-W can be improved by incorporating RT.
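The RT indices named in the abstract (overall mean, correct- and incorrect-trial means, variability, and the correct-to-incorrect ratio) are straightforward to compute from trial-level data. A minimal sketch, assuming each trial is an (rt_ms, correct) pair; the function and field names are illustrative, not the authors' code:

```python
from statistics import mean, stdev

def rt_indices(trials):
    """Summarize response-time (RT) indices for one examinee from a
    list of (rt_ms, correct) pairs. Illustrative sketch only."""
    rts = [rt for rt, _ in trials]
    correct_rts = [rt for rt, ok in trials if ok]
    incorrect_rts = [rt for rt, ok in trials if not ok]
    indices = {
        "mean_rt": mean(rts),
        "rt_variability": stdev(rts) if len(rts) > 1 else 0.0,
        "mean_rt_correct": mean(correct_rts) if correct_rts else None,
        "mean_rt_incorrect": mean(incorrect_rts) if incorrect_rts else None,
    }
    # Ratio of RT on correct items to RT on incorrect items
    if correct_rts and incorrect_rts:
        indices["correct_incorrect_ratio"] = (
            indices["mean_rt_correct"] / indices["mean_rt_incorrect"]
        )
    return indices
```

In a simulation design like this one, each index would then be entered alongside the accuracy score as a predictor of group membership.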
Source: http://dx.doi.org/10.1080/13854046.2020.1716997

Detecting feigned traumatic brain injury with eye tracking during a test of performance validity.

Neuropsychology 2020 Mar 16;34(3):308-320. Epub 2020 Jan 16.

Department of Psychology.

Objective: Eye-tracking is a promising technology to enhance assessment of performance validity. Research has established that ocular behaviors are reliable biomarkers of (un)conscious cognitive processes, and they have distinguished deceptive from honest responding in experimental paradigms. This study examined the incremental utility of eye-tracking on a clinical performance validity test (PVT) in distinguishing adults with verified TBI from adults coached to feign cognitive impairment.

Method: Participants were 49 adults with moderate-to-severe TBI (TBI), 47 healthy adults coached to simulate TBI (SIM), and 67 healthy comparisons providing full effort (HC). A PVT linked to eye-tracking was completed in the context of a full neuropsychological battery.

Results: Kruskal-Wallis tests revealed that eye-tracking indices did not differ among the groups during presentation of stimulus items but did differ during forced-choice trials. Compared to TBI and HC, SIM had significantly more transitions, fixations, and time spent looking at correct and incorrect response options. Logistic regressions and ROC curve analyses showed that accuracy was the best predictor of SIM versus HC status. For SIM versus TBI, eye-tracking indices outperformed accuracy in distinguishing the groups. Eye-tracking added incremental predictive value to accuracy for both SIM-HC and SIM-TBI discriminations.
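Discrimination results like these are typically summarized as the area under the ROC curve (AUC), which for a single index equals the probability that a randomly chosen simulator scores above a randomly chosen comparison participant. A minimal stdlib sketch of that pairwise (Mann-Whitney) computation, on made-up scores:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via pairwise comparison of the
    positive (e.g., simulator) and negative (e.g., comparison)
    groups; equivalent to the Mann-Whitney U statistic, scaled."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    total = len(scores_pos) * len(scores_neg)
    return (wins + 0.5 * ties) / total
```

An AUC of 0.5 indicates chance-level discrimination and 1.0 perfect separation; "incremental predictive value" corresponds to a model with both predictors yielding a higher AUC than accuracy alone.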

Conclusion: Eye-tracking indicated that persons feigning TBI showed multiple signs of greater cognitive effort than persons with verified TBI and healthy comparisons. In the comparison of greatest interest (SIM vs. TBI), eye-tracking best predicted group status and yielded excellent discrimination when combined with accuracy. Eye-tracking may be an important complement to traditional accuracy scores on PVTs.
Source: http://dx.doi.org/10.1037/neu0000613

Detecting malingering in traumatic brain injury: Combining response time with performance validity test accuracy.

Clin Neuropsychol 2019 Jan 22;33(1):90-107. Epub 2018 Feb 22.

Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA.

Objective: The present study examined the incremental utility of item-level response time (RT) variables on a traditional performance validity test in distinguishing adults with verified TBI from adults coached to feign neurocognitive impairment.

Method: Participants were 45 adults with moderate to severe TBI, 45 healthy adults coached to feign neurocognitive impairment (SIM), and 61 healthy adult comparisons providing full effort (HC). All participants completed a computerized version of the Test of Memory Malingering (TOMM-C) in the context of a larger test battery. RT variables examined along with TOMM-C accuracy scores included mean RTs (Trial 1, Trial 2, correct and incorrect trials) and RT variability indices.

Results: Several RT indices differed significantly across the groups. In general, SIM produced longer, more variable RTs than HC and TBI. Of the RT indices, average RTs for Trials 1 and 2 were the best predictors of group membership; however, classification accuracies depended heavily on which groups were compared. Average RTs for Trials 1 and 2 showed excellent discrimination between SIM and HC. All RT indices were less successful in discriminating SIM from TBI. Average RTs for Trials 1 and 2 added incremental predictive value to TOMM-C accuracy in distinguishing SIM from TBI.

Conclusion: Findings contribute to a limited body of research examining the incremental utility of combining RT with traditional PVTs in distinguishing feigned and bona fide TBI. Findings support the hypothesis that combining RT with TOMM-C accuracy can improve its diagnostic accuracy. Future research with other groups of clinical interest is recommended.
Source: http://dx.doi.org/10.1080/13854046.2018.1440006

Strategies of successful and unsuccessful simulators coached to feign traumatic brain injury.

Clin Neuropsychol 2017 Apr 13;31(3):644-653. Epub 2017 Jan 13.

Department of Psychology, Wayne State University, Detroit, MI, USA.

Objective: The present study evaluated strategies used by healthy adults coached to simulate traumatic brain injury (TBI) during neuropsychological evaluation.

Method: Healthy adults (n = 58) were coached to simulate TBI while completing a test battery consisting of multiple performance validity tests (PVTs), neuropsychological tests, a self-report scale of functional independence, and a debriefing survey about strategies used to feign TBI.

Results: "Successful" simulators (n = 16) were classified as participants who failed 0 or 1 PVT and also scored as impaired on one or more neuropsychological index. "Unsuccessful" simulators (n = 42) failed ≥2 PVTs or passed PVTs but did not score impaired on any neuropsychological index. Compared to unsuccessful simulators, successful simulators had significantly more years of education, higher estimated IQ, and were more likely to use information provided about TBI to employ a systematic pattern of performance that targeted specific tests rather than performing poorly across the entire test battery.

Conclusion: Results contribute to a limited body of research investigating strategies utilized by individuals instructed to feign neurocognitive impairment. Findings signal the importance of developing additional embedded PVTs within standard cognitive tests to assess performance validity throughout a neuropsychological assessment. Future research should consider specifically targeting embedded measures in visual tests sensitive to slowed responding (e.g. response time).
Source: http://dx.doi.org/10.1080/13854046.2016.1278040