Publications by authors named "Irene D Feurer"

90 Publications

Sex and Gender Disparities in Pretransplant Characteristics and Relationships with Postoperative Outcomes in Liver Transplant Recipients with Alcoholic Liver Disease.

Exp Clin Transplant 2020 11 16;18(6):701-706. Epub 2020 Jun 16.

From the Division of Hepatobiliary Surgery and Liver Transplantation, Section of Surgical Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA.

Objectives: Previous studies of liver transplant recipients have reported discrepancies with regard to gender and/or sex differences but have focused on pretransplant outcomes. Female candidates are less likely to receive liver transplant and more likely to die or be delisted than their male counterparts. Here, we examined differences in men versus women with alcoholic liver disease before liver transplant and the effects of these differences on posttransplant survival.

Materials And Methods: We analyzed the Scientific Registry of Transplant Recipients records of adult, deceased-donor, whole liver transplant recipients with decompensated alcoholic liver disease from 2002 to 2017 to evaluate the effects of gender on survival in 2 alcoholic liver disease cohorts: (a) including and (b) excluding recipients with additional diagnoses. Pretransplant characteristics were compared using chi-square or t tests. Kaplan-Meier and multivariable proportional hazards regression models were used to evaluate the main and covariable-adjusted effects of gender on survival.

Results: Of 13781 transplant recipients with decompensated end-stage liver disease, as defined by a Model for End-Stage Liver Disease score ≥ 15, 10924 (79%) were men and 2857 (21%) were women. Women had higher Model for End-Stage Liver Disease scores and higher rates of stage 4 and 5 chronic kidney disease, and were more likely to be on dialysis or ventilator support at the time of transplant (all P < .05). Among all recipients, and after adjusting for risk factors, men were approximately 9% more likely than women to experience long-term graft loss (hazard ratio = 1.093; 95% confidence interval, 1.00-1.19; P = .043). However, sex was not associated with risk of graft loss among those without additional diagnoses (hazard ratio = 1.09; 95% confidence interval, 0.99-1.21; P = .095).

Conclusions: Although women with alcoholic liver disease who undergo liver transplant have higher severity of illness than their male counterparts, long-term outcomes are comparable.
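Illustrative note (not from the article): a covariate-adjusted Cox proportional hazards model of the kind described in the Methods can be sketched in Python with lifelines. All column names and data below are hypothetical.

```python
# Hedged sketch of a covariate-adjusted Cox model for long-term graft loss.
# Synthetic data only; "male", "meld", and "on_dialysis" are invented columns.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "meld": rng.integers(15, 40, n),      # MELD >= 15 per the cohort definition
    "on_dialysis": rng.integers(0, 2, n),
    "months": rng.exponential(60, n),     # follow-up time to graft loss/censoring
    "graft_loss": rng.integers(0, 2, n),  # 1 = graft loss observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="graft_loss")
cph.print_summary()  # exp(coef) column gives hazard ratios with 95% CIs
```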
DOI: http://dx.doi.org/10.6002/ect.2020.0063
November 2020

Comparison of Wait-List Mortality Between Cholangiocarcinoma and Hepatocellular Carcinoma Liver Transplant Candidates.

Liver Transpl 2020 09 21;26(9):1112-1120. Epub 2020 Jul 21.

Department of Surgery, Division of Hepatobiliary Surgery and Liver Transplantation, Vanderbilt University Medical Center, Nashville, TN.

Despite the divergent disease biology of cholangiocarcinoma (CCA) and hepatocellular carcinoma (HCC), wait-list prioritization is identical for both diagnoses. We compared wait-list and posttransplant outcomes between CCA and HCC liver transplantation patients with Model for End-Stage Liver Disease exceptions using Scientific Registry of Transplant Recipients data. The 408 CCA candidates listed between 2003 and mid-2017 were matched to 2 HCC cohorts by listing date (±2 months, n = 816) and by Organ Procurement and Transplantation Network (OPTN) region and date (±6 months, n = 408). Cumulative incidence competing risk regression examined the effects of diagnosis, OPTN region, and center-level CCA listing volume on wait-list removal due to death/being too ill (dropout). Cox models evaluated the effects of diagnosis, OPTN region, center-level CCA volume, and waiting time on graft failure among deceased donor liver transplantation (DDLT) recipients. After adjusting for OPTN region and CCA listing volume (all P ≥ 0.07), both HCC cohorts had a reduced likelihood of wait-list dropout compared with CCA candidates (HCC with period matching only: subdistribution hazard ratio [SHR] = 0.63; 95% CI, 0.43-0.93; P = 0.02 and HCC with OPTN region and period matching: SHR = 0.60; 95% CI, 0.41-0.87; P = 0.007). The cumulative incidence rates of wait-list dropout at 6 and 12 months were 13.2% (95% CI, 10.0%-17.0%) and 23.9% (95% CI, 20.0%-29.0%) for CCA candidates, 7.3% (95% CI, 5.0%-10.0%) and 12.7% (95% CI, 10.0%-17.0%) for HCC candidates with region and listing date matching, and 7.1% (95% CI, 5.0%-9.0%) and 12.6% (95% CI, 10.0%-15.0%) for HCC candidates with listing date matching only. Additionally, HCC DDLT recipients had a 57% reduced risk of graft failure compared with CCA recipients (P < 0.001). Waiting time was unrelated to graft failure (P = 0.57), and there was no waiting time by diagnosis cohort interaction effect (P = 0.47). When identically prioritized, LT candidates with CCA have increased wait-list dropout compared with those with HCC. More granular data are necessary to discern ways to mitigate this wait-list disadvantage and improve survival for patients with CCA.
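Illustrative note (not from the article): the cumulative incidence of wait-list dropout with transplantation as a competing event can be estimated nonparametrically in lifelines. lifelines does not implement Fine-Gray subdistribution regression, so this sketch covers only the cumulative incidence side of the analysis; all data are invented.

```python
# Hedged sketch: Aalen-Johansen cumulative incidence with a competing risk.
import numpy as np
from lifelines import AalenJohansenFitter

rng = np.random.default_rng(1)
n = 400
months = rng.exponential(12, n)
# 0 = censored, 1 = dropout (death / too ill), 2 = transplanted (competing event)
event = rng.choice([0, 1, 2], size=n, p=[0.30, 0.25, 0.45])

ajf = AalenJohansenFitter()
ajf.fit(months, event, event_of_interest=1)
print(ajf.cumulative_density_.head())  # cumulative incidence of dropout over time
```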
DOI: http://dx.doi.org/10.1002/lt.25807
September 2020

Geographic Disparities in Access to Simultaneous Pancreas and Kidney Transplant in the Pre- and Post-Pancreas Allocation System Eras.

Transplantation 2020 03;104(3):623-631

Division of Kidney and Pancreas Transplantation, Department of Surgery, Vanderbilt University Medical Center, Nashville, TN.

Background: The 2014 pancreas allocation system (PAS) intended to decrease geographic variability in listing practices for simultaneous pancreas and kidney (SPK) transplant and define eligibility criteria for those with type 2 diabetes mellitus (T2DM). Our primary aims were to evaluate geographic disparities in access to SPK and assess T2DM SPK listings in the pre- and post-PAS eras.

Methods: Adult listings for SPK and kidney transplant (pre-PAS, January 2010 to October 29, 2014; post-PAS, October 30, 2014, to June 2, 2017) were identified in the Scientific Registry of Transplant Recipients. Multivariable logistic regression models tested associations of geography and/or diabetes mellitus type on the likelihood of SPK versus kidney transplant listing pre- and post-PAS. Competing risk models tested the likelihood of SPK transplantation within 2 years of listing for SPK.

Results: Among 41 205 listings (27 393 pre-PAS; 24 439 T2DM), univariate analysis showed reduced percentages for SPK post-PAS (22.1%-20.8%; P = 0.003). After adjusting for patient and center characteristics, geographic disparities declined slightly but persisted post-PAS (era by region interaction P < 0.001). The era by type of diabetes mellitus interaction effect was statistically significant (P = 0.039), reflecting that the proportions of SPK listings for T2DM increased in the post-PAS era (3.4%-3.9%; univariate P = 0.038), while those for type 1 diabetes mellitus remained statistically stable (47.9%-48.4%; univariate P = 0.571). Among people listed for SPK, geographic disparities in the cumulative incidence of transplantation within 2 years declined and the overall likelihood of transplantation increased in the post-PAS era (both P < 0.001).

Conclusions: Geographic disparities in access to SPK declined slightly but persisted post-PAS. With new allocation change proposals and elimination of listing criteria for T2DM, further monitoring is warranted.
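Illustrative note (not from the article): an era-by-region interaction on the odds of SPK (versus kidney-alone) listing, as described in the Methods, can be tested with a logistic model in statsmodels. Column names and data below are hypothetical.

```python
# Hedged sketch of a logistic regression with an era-by-region interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "spk_listing": rng.integers(0, 2, n),                      # 1 = listed for SPK
    "era": rng.choice(["pre_PAS", "post_PAS"], n),
    "region": rng.choice([f"R{r}" for r in range(1, 12)], n),  # 11 OPTN regions
})

fit = smf.logit("spk_listing ~ C(era) * C(region)", data=df).fit(disp=False)
print(fit.summary())  # interaction terms: did regional disparities change by era?
```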
DOI: http://dx.doi.org/10.1097/TP.0000000000002850
March 2020

Interpretation of Domain Scores on the EPIC: How Does the Domain Score Translate into Functional Outcomes?

J Urol 2019 12 19;202(6):1150-1158. Epub 2019 Jun 19.

Department of Urology, Patient Advocacy Program, Vanderbilt Ingram Cancer Center, Vanderbilt University Medical Center, Nashville, Tennessee.

Purpose: The EPIC-26 (Expanded Prostate Cancer Index Composite-Short Form) is a validated questionnaire for measuring health related quality of life. However, the relationship between domain scores and functional outcomes remains unclear, leading to potential confusion about expectations after treatment. For instance, does a sexual function domain score of 80 mean that a patient can achieve erection sufficient for intercourse? Consequently we sought to determine the relationship between the domain score and the response to obtaining the best possible outcome for each question.

Materials And Methods: Using data from the CEASAR (Comparative Effectiveness Analysis of Surgery and Radiation) study, a multicenter, prospective study of men diagnosed with localized prostate cancer, we analyzed 11,464 EPIC-26 questionnaires from a total of 2,563 men at baseline through 60 months of followup who were treated with robotic prostatectomy, radiotherapy or active surveillance. We dichotomized every item into its best possible outcome and assessed the percent of men at each domain score who achieved the best result.

Results: For every EPIC-26 item the frequency of the best possible outcome was reported by domain score category. For example, a score of 80 to 100 on sexual function corresponded to 97% of men reporting erections sufficient for intercourse while at a score of 40 to 60 only 28% reported adequate erections. Also, at a score of 80 to 100 on the urinary incontinence domain 93% of men reported rarely or never leaking vs 6% at a score of 61 to 80.

Conclusions: Our findings indicate a novel way to interpret EPIC-26 domain scores, demonstrating large variations in the percent of respondents reporting the best possible outcomes over narrow domain score differences. This information may be valuable when counseling men on treatment options.
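Illustrative note (not from the article): the core tabulation, binning a domain score and reporting the percent achieving the best item response within each band, is a few lines of pandas. Data and column names below are invented.

```python
# Hedged sketch: percent reporting the best possible outcome per score band.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "sexual_domain_score": rng.uniform(0, 100, 1000),
    "erection_sufficient": rng.integers(0, 2, 1000),  # 1 = best possible response
})

bands = pd.cut(df["sexual_domain_score"], bins=[0, 20, 40, 60, 80, 100],
               include_lowest=True)
pct_best = df.groupby(bands, observed=True)["erection_sufficient"].mean() * 100
print(pct_best.round(1))
```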
DOI: http://dx.doi.org/10.1097/JU.0000000000000392
December 2019

The Changing Face of Liver Transplantation in the United States: The Effect of HCV Antiviral Eras on Transplantation Trends and Outcomes.

Transplant Direct 2019 Mar 20;5(3):e427. Epub 2019 Feb 20.

Division of Hepatobiliary Surgery and Liver Transplantation, Section of Surgical Sciences, Vanderbilt University Medical Center, Nashville, TN.

Background: Hepatitis C virus (HCV) cirrhosis is the leading indication for liver transplantation in the United States, although nonalcoholic steatohepatitis (NASH) is on the rise. Increasingly effective HCV antivirals are available, but their association with diagnosis-specific liver transplantation rates and early graft survival is not known.

Methods: The Scientific Registry of Transplant Recipients database records were retrospectively stratified by HCV antiviral era: interferon (2003-2010), protease inhibitors (2011-2013), and direct-acting antivirals (2014 to present). Kaplan-Meier, chi-square, and multivariable Cox proportional hazards regression models evaluated the effects of antiviral era and etiology of liver disease on transplantation rates and graft survival over 3 years.

Results: Liver transplants for HCV decreased (35.3% to 23.6%), whereas those for NASH and alcoholic liver disease increased (5.8% to 16.5% and 15.6% to 24.0%) with each advancing era (all P < 0.05). Early graft survival improved with each advancing era for HCV but not for hepatitis B virus, NASH, or alcoholic liver disease (multivariable model era by diagnosis interaction P < 0.001). Era-specific multivariable models demonstrated that the risk of early graft loss for NASH was 22% lower than for HCV in the interferon era (hazard ratio, 0.78; 95% confidence interval, 0.64-0.96; P = 0.02), but risks associated with these diagnoses did not differ significantly in the protease inhibitor (P = 0.06) or direct-acting antiviral (P = 0.08) eras.

Conclusions: Increasing effectiveness of HCV antivirals corresponds with decreased rates of liver transplantation for HCV and improved early graft survival. As the rates of liver transplant for NASH continue to increase, focus will be needed on the prevention and effective therapies for this disease.
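Illustrative note (not from the article): the era stratification stated in the Methods maps directly to a small helper; the function below is a hypothetical reconstruction of those cutoffs.

```python
# Hedged sketch of the HCV antiviral era stratification by transplant year.
def antiviral_era(year: int) -> str:
    if 2003 <= year <= 2010:
        return "interferon"
    if 2011 <= year <= 2013:
        return "protease inhibitor"
    if year >= 2014:
        return "direct-acting antiviral"
    return "outside study period"

print(antiviral_era(2009), antiviral_era(2012), antiviral_era(2016), sep=" | ")
```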
DOI: http://dx.doi.org/10.1097/TXD.0000000000000866
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6411219
March 2019

A2 to B Kidney Transplantation in the Post-Kidney Allocation System Era: A 3-year Experience with Anti-A Titers, Outcomes, and Cost.

J Am Coll Surg 2019 04 30;228(4):635-641. Epub 2019 Jan 30.

Division of Kidney and Pancreas Transplantation, Department of Surgery, Vanderbilt University Medical Center, Nashville, TN.

Background: The new kidney allocation system (KAS) instituted in December 2014 permitted A2 to B deceased donor kidney transplantation (DDKTx) to improve access and reduce disparities in wait time for minorities. A recent United Network for Organ Sharing (UNOS) analysis, however, indicated that only 4.5% of B candidates were registered for A2 kidneys. Cited barriers to A2 to B DDKTx include titer thresholds, patient eligibility, and increased costs. Few published data exist on post-transplantation anti-A titers or on outcomes of A2 to B DDKTx since this allocation change.

Study Design: We conducted a retrospective, single center, cohort analysis of 29 consecutive A2 to B and 50 B to B DDKTx from December 2014 to December 2017. Pre- and postoperative anti-A titers were monitored prospectively. Outcomes included post-transplant anti-A titers, patient and graft survival, renal function, and hospital costs.

Results: African Americans comprised 72% of the A2 to B and 60% of the B to B group. There was no difference in mean wait time (58.8 vs 70.8 months). Paired tests indicated that anti-A IgG titers in A2 to B DDKTx were increased at discharge (p = 0.001) and at 4 weeks (p = 0.037). There were no significant differences in patient or graft survival, serum creatinine (SCr), or estimated glomerular filtration rate (eGFR), but the trajectories of SCr and eGFR differed between groups over the follow-up period. A2 to B had significantly higher mean transplant total hospital costs ($114,638 vs $91,697, p < 0.001) and hospital costs net organ acquisition costs ($42,356 vs $20,983, p < 0.001).

Conclusions: Initial experience under KAS shows comparable outcomes for A2 to B vs B to B DDKTx. Anti-A titers increased significantly post-transplantation but did not adversely affect outcomes. Hospital costs were significantly higher with A2 to B DDKTx. Transplant programs, regulators, and payors will need to weigh improved access for minorities against increased costs.
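Illustrative note (not from the article): paired pre/post anti-A titer comparisons are often done on the log2 (doubling-dilution) scale. The abstract does not specify the exact test, so the Wilcoxon signed-rank test below is one hedged choice, with invented titers.

```python
# Hedged sketch: paired comparison of anti-A IgG titers pre vs at discharge.
import numpy as np
from scipy import stats

pre  = np.log2([2, 4, 8, 4, 2, 16, 8, 4])   # hypothetical doubling dilutions
post = np.log2([4, 8, 8, 8, 4, 32, 8, 8])

stat, p = stats.wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```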
DOI: http://dx.doi.org/10.1016/j.jamcollsurg.2018.12.023
April 2019

Long-Term Physical HRQOL Decreases After Single Lung as Compared With Double Lung Transplantation.

Ann Thorac Surg 2018 12 16;106(6):1633-1639. Epub 2018 Aug 16.

Department of Thoracic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee; Transplant Center, Vanderbilt University Medical Center, Nashville, Tennessee.

Background: Single lung transplantation (SLT) and double lung transplantation (DLT) are associated with differences in morbidity and mortality, although the effects of transplant type on patient-reported outcomes are not widely reported and conclusions have differed. Previous studies compared mean health-related quality of life (HRQOL) scores but did not evaluate potentially different temporal trajectories in the context of longitudinal follow-up. To address this uncertainty, this study was designed to evaluate longitudinal HRQOL after SLT and DLT with the hypothesis that temporal trajectories differ between SLT and DLT.

Methods: Patients transplanted at a single institution were eligible to be surveyed at 1 month, 3 months, 6 months, and then annually after transplant using the Short Form 36 Health Survey, with longitudinal physical component summary (PCS) and mental component summary (MCS) scores as the primary outcomes. Multivariable mixed-effects models were used to evaluate the effects of transplant type and time posttransplant on longitudinal PCS and MCS after adjusting for age, diagnosis, rejection, Lung Allocation Score (LAS) quartile, and intubation duration. Time by transplant type interaction effects were used to test whether the temporal trajectories of HRQOL differed between SLT and DLT recipients. HRQOL scores were referenced to general population norms (range, 40 to 60; mean, 50 ± 10) using accepted standards for a minimally important difference (½ SD, 5 points).

Results: Postoperative surveys (n = 345) were analyzed for 136 patients (52% male, 23% SLT, age 52 ± 13 years, LAS 42 ± 12, follow-up 37 ± 29 months [range, 0.6 to 133]) who underwent lung transplantation between 2005 and 2016. After adjusting for model covariates, overall posttransplant PCS scores had a significant downward trajectory (p = 0.015) whereas MCS scores remained stable (p = 0.593), with both averaging within general population norms. The time by transplant type interaction effect (p = 0.002), however, indicates that posttransplant PCS scores of SLT recipients declined at a rate of 2.4 points per year relative to DLT over the total observation period. At approximately 60 months, the PCS scores of SLT recipients, but not DLT recipients, fell below general population norms.

Conclusions: The trajectory of physical HRQOL in patients receiving SLT declines over time compared with DLT, indicating that, in the longer term, SLT recipients are more likely to have physical HRQOL scores that fall substantively below general population norms. Physical HRQOL after 5 years may be a consideration for lung allocation and patient counseling regarding expectations when recommending SLT or DLT.
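Illustrative note (not from the article): a mixed-effects model with a time-by-transplant-type interaction and patient-level random intercepts, as described in the Methods, can be sketched with statsmodels. Data, effect sizes, and column names are invented; the 2.4-point slope is seeded in only so the printed interaction is visible.

```python
# Hedged sketch: longitudinal PCS with a time x transplant-type interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_pat, n_obs = 100, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_pat), n_obs),
    "years_post_tx": np.tile(np.arange(n_obs, dtype=float), n_pat),
    "single_lung": np.repeat(rng.integers(0, 2, n_pat), n_obs),
})
df["pcs"] = 50 - 2.4 * df["years_post_tx"] * df["single_lung"] \
            + rng.normal(0, 5, len(df))

fit = smf.mixedlm("pcs ~ years_post_tx * single_lung", df,
                  groups=df["patient"]).fit()
print(fit.summary())  # the interaction term captures the extra SLT decline
```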
DOI: http://dx.doi.org/10.1016/j.athoracsur.2018.06.072
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6240480
December 2018

Directed solutions to address differences in access to liver transplantation.

Am J Transplant 2018 11 24;18(11):2670-2678. Epub 2018 May 24.

Department of Surgery and the Transplant Center, Vanderbilt University Medical Center, Nashville, TN, USA.

The United Network for Organ Sharing recently altered current liver allocation with the goal of decreasing Model for End-Stage Liver Disease (MELD) variance at transplant. Concerns over these and further planned revisions to policy include predicted decrease in total transplants, increased flying and logistical complexity, adverse impact on areas with poor quality health care, and minimal effect on high MELD donor service areas. To address these issues, we describe general approaches to equalize critical transplant metrics among regions and determine how they alter MELD variance at transplant and organ supply to underserved communities. We show that an allocation system that increases the minimum MELD for local allocation, or that preferentially directs organs into areas of need, decreases MELD variance. Both models have minimal adverse effects on flying and total transplants, and do not disproportionately disadvantage already underserved communities. When combined, these approaches decrease MELD variance by 28%, more than the recently adopted proposal. These models can be adapted for any measure of variance, can be combined with other proposals, and can be configured to adjust automatically to changes in disease incidence, as is occurring with hepatitis C and nonalcoholic fatty liver disease.
DOI: http://dx.doi.org/10.1111/ajt.14889
November 2018

Improved Health-Related Quality of Life in a Phase 3 Islet Transplantation Trial in Type 1 Diabetes Complicated by Severe Hypoglycemia.

Diabetes Care 2018 05 21;41(5):1001-1008. Epub 2018 Mar 21.

Diabetes Research Institute, Miller School of Medicine, University of Miami, Miami, FL.

Objective: Attaining glycemic targets without severe hypoglycemic events (SHEs) is a challenging treatment goal for patients with type 1 diabetes complicated by impaired awareness of hypoglycemia (IAH). The CIT Consortium Protocol 07 (CIT-07) trial showed islet transplantation to be an effective treatment for subjects with IAH and intractable SHEs. We evaluated health-related quality of life (HRQOL), functional health status, and health utility before and after pancreatic islet transplantation in CIT-07 trial participants.

Research Design And Methods: Four surveys, the Diabetes Distress Scale (DDS), the Hypoglycemic Fear Survey (HFS), the Short Form 36 Health Survey (SF-36), and the EuroQoL 5 Dimensions (EQ-5D), were administered repeatedly before and after islet transplantation. Summary statistics and longitudinal modeling were used to describe changes in survey scores from baseline and to characterize change in relation to a minimally important difference (MID) threshold of half an SD.

Results: Improvements in condition-specific HRQOL met the MID threshold. Reductions from baseline in the DDS total score and its four DDS subscales (all P ≤ 0.0013) and in the HFS total score and its two subscales (all P < 0.0001) were observed across all time points. Improvements were observed after both 1 and 2 years for the EQ-5D visual analog scale (both P < 0.0001).

Conclusions: In CIT-07, 87.5% of the subjects achieved the primary end point of freedom from SHE along with glycemic control (HbA1c <7% [<53 mmol/mol]) at 1 year post-initial islet transplantation. The same subjects reported consistent, statistically significant, and clinically meaningful improvements in condition-specific HRQOL as well as self-assessments of overall health.
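Illustrative note (not from the article): the half-SD minimally important difference (MID) rule used here reduces to a one-line threshold check; the scores below are invented.

```python
# Hedged sketch of the half-SD MID criterion for HRQOL change.
import numpy as np

baseline = np.array([2.8, 3.1, 2.5, 3.4, 2.9])  # hypothetical DDS totals
year1    = np.array([1.9, 2.2, 1.8, 2.6, 2.0])

mid = 0.5 * baseline.std(ddof=1)                # MID = half a baseline SD
mean_improvement = (baseline - year1).mean()
print(f"MID = {mid:.2f}; mean improvement = {mean_improvement:.2f}; "
      f"clinically meaningful = {mean_improvement >= mid}")
```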
DOI: http://dx.doi.org/10.2337/dc17-1779
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5911786
May 2018

Increasing kidney donor profile index sequence does not adversely affect medium-term health-related quality of life after kidney transplantation.

Clin Transplant 2018 Apr 30;32(4):e13212. Epub 2018 Mar 30.

Department of Surgery, Division of Kidney and Pancreas Transplantation, Vanderbilt University Medical Center, Nashville, TN, USA.

Background: The United Network for Organ Sharing system allocates deceased donor kidneys based on the kidney donor profile index (KDPI), stratified as sequences (A: ≤20%; B: >20% to <35%; C: 35% to 85%; D: >85%), with increasing KDPI associated with decreased graft survival. While health-related quality of life (HRQOL) may improve after transplantation, the effect of donor kidney quality, reflected by KDPI sequence, on post-transplant HRQOL has not been reported.

Methods: Health-related quality of life was measured using the eight scales and physical and mental component summaries (PCS, MCS) of the SF-36 Health Survey. Multivariable mixed effects models that adjusted for age, gender, rejection, and previous transplant and analysis of variance methods tested the effects of time and KDPI sequence on post-transplant HRQOL.

Results: A total of 141 waitlisted adults and 505 recipients (>1700 observations) were included. Pretransplant PCS and MCS averaged, respectively, slightly below and within general population norms (GPN; 40-60). At 31 ± 26 months post-transplant, average PCS (41 ± 11) and MCS (51 ± 11), overall and within each KDPI sequence, were within GPN. KDPI sequence was not related to post-transplant HRQOL (P > .134) or its trajectory (interaction P > .163).

Conclusion: Increasing KDPI does not adversely affect the medium-term values and trajectories of HRQOL after kidney transplantation. This may reassure patients and centers when considering using high KDPI kidneys.
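Illustrative note (not from the article): the KDPI sequence bands quoted in the Background translate into a simple classifier; boundary handling below follows the stated cutoffs.

```python
# Hedged sketch of KDPI sequence assignment (A <=20%, B >20 to <35%,
# C 35 to 85%, D >85%), per the bands quoted in the abstract.
def kdpi_sequence(kdpi: float) -> str:
    if kdpi <= 20:
        return "A"
    if kdpi < 35:
        return "B"
    if kdpi <= 85:
        return "C"
    return "D"

for k in (5, 20, 34.9, 35, 85, 86):
    print(f"KDPI {k:>5} -> sequence {kdpi_sequence(k)}")
```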
DOI: http://dx.doi.org/10.1111/ctr.13212
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5933873
April 2018

Implementation of telehealth is associated with improved timeliness to kidney transplant waitlist evaluation.

J Telemed Telecare 2018 Aug 26;24(7):485-491. Epub 2017 Jun 26.

1 Renal Transplant, US Department of Veterans Affairs Hospital, Tennessee Valley, Nashville, TN, USA.

Introduction: The United States Department of Veterans Affairs (VA) National Transplant Program has made efforts to improve access by introducing Web-based referrals and telehealth. The aims of this study were to describe the programmatic implementation and evaluate the effectiveness of new technology on the timeliness to kidney transplant evaluation at a VA medical centre.

Methods: Between 1 January 2009 and 31 May 2016, 835 patients were approved for evaluation. Monthly data were summarized as: number of applications, median days to evaluation, and median percentage of evaluations that occurred within 30 days. Temporal trends were analysed using non-parametric comparisons of medians between three eras: Pre Web-based submission, Web-based submission, and Web-based submission with videoconference (VC) telehealth.

Results: The number of applications did not vary between eras (p = 0.353). The median time to evaluation and the median percentage of patients with appointments within 30 days improved significantly in the Web-based submission with VC era when compared with the Web-based and Pre Web-based eras (37 vs. 260 and 116 days, respectively, p < 0.001; 100% vs. 8% and 0%, respectively, p < 0.001).

Discussion: We have been able to markedly improve the timeliness to kidney transplant waitlist evaluation with the addition of telehealth.
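Illustrative note (not from the article): the abstract reports nonparametric comparisons of medians across three eras; the exact test is not named here, so the Kruskal-Wallis test below is a hedged stand-in, with invented day counts.

```python
# Hedged sketch: nonparametric comparison of days-to-evaluation across eras.
from scipy import stats

pre_web  = [260, 240, 300, 280, 255]   # hypothetical days to evaluation
web_only = [116, 120, 98, 130, 110]
web_vc   = [37, 30, 28, 41, 35]

stat, p = stats.kruskal(pre_web, web_only, web_vc)
print(f"Kruskal-Wallis: H = {stat:.2f}, p = {p:.4f}")
```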
DOI: http://dx.doi.org/10.1177/1357633X17715526
August 2018

Patient-Reported Outcome Measures in Upper Airway-Related Dyspnea: A Systematic Review.

JAMA Otolaryngol Head Neck Surg 2017 08;143(8):824-831

Division of Otolaryngology, Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison.

Importance: Patient-reported outcome (PRO) measures address the need for patient-centered data and are now used in diverse clinical, research, and policy pursuits. They are important in conditions causing upper airway-related dyspnea in which the patient's reported experience and physiological data can be discrepant.

Objectives: To perform a systematic review of the literature on upper airway dyspnea-related PRO measures and to rigorously evaluate each measure's developmental properties, validation, and applicability.

Evidence Review: This study strictly adhered to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. MEDLINE via the PubMed interface, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and the Health and Psychosocial Instruments (HaPI) database were searched using relevant vocabulary terms and key terms related to PRO measures and upper airway-related dyspnea. Three investigators performed abstract review, and 2 investigators independently performed full-text review by applying an established checklist to evaluate the conceptual model, content validity, reliability, construct validity, scoring and interpretability, and respondent burden and presentation of each identified instrument. The initial literature search was conducted in November 2014 and was updated in April 2016.

Findings: Of 1269 studies reviewed, 3 upper airway-related dyspnea PRO measures met criteria for inclusion. One PRO measure was designed de novo to assess upper airway-related dyspnea symptoms and monitor treatment outcomes, while 2 were adapted from established instruments designed for lower airway disease. Measurement properties and psychometric characteristics differed, and none met all checklist criteria. Two met a criterion in each of the 7 domains evaluated. Two demonstrated test-retest and internal consistency reliability, and 2 showed that their scores were responsive to change. Thematic deficiencies in current upper airway-related dyspnea PRO measures include lack of patient involvement in item development (content validity), lack of a plan for score interpretation, and absence of literacy-level assessment.

Conclusions And Relevance: PRO measures are critical in the assessment of patients with upper airway-related dyspnea. Three instruments with disparate developmental rigor have been designed or adapted to assess this construct. Care must be taken to understand the measurement characteristics and contextual relevance before applying these PRO measures for clinical, research, or quality initiatives.
DOI: http://dx.doi.org/10.1001/jamaoto.2017.0348
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5604091
August 2017

An Outcome-Based Approach to Assign MELD Exception Points for Patients With Hepatocellular Cancer.

Transplantation 2017 09;101(9):2056-2061

1 Department of Surgery, Vanderbilt Transplant Center, Vanderbilt University Medical Center, Nashville, TN. 2 Department of Biostatistics, Vanderbilt University School of Medicine, Nashville, TN.

Background: Current Model for End-Stage Liver Disease (MELD) exception points provided to patients with hepatocellular cancer (HCC) are not based on outcome data and advantage these patients compared to those listed based on laboratory values (LABMELD). We sought to develop a data-based assignment for exception points for patients with HCC that equalizes outcomes among HCC and LABMELD patients.

Methods: We used Scientific Registry of Transplant Recipients data to compare patients listed with HCC who received exception points versus patients listed with LABMELD. Nation- and region-specific data were examined for (1) a composite outcome for adverse events of death, delisting, or becoming ineligible for transplant; and (2) transplant rate. We also determined MELD progression rates for LABMELD patients. Candidates listed with LABMELD scores were compared with those listed with 22 exception points for HCC (HCC22) to determine the LABMELD for which statistical parity was achieved for our composite outcome.

Results: Time to adverse event among HCC22 candidates was comparable to that of candidates with LABMELD scores of 16 (LABMELD16; range, 15-19), whereas time to transplant was comparable to that of LABMELD22 candidates (range, 21-23). LABMELD22 candidates had 2.1 times greater risk of adverse event compared with HCC22 candidates (95% confidence interval, 1.9-2.4; range, 1.5-2.4). MELD progression among LABMELD16 candidates whose scores did not improve was similar across regions and averaged 0.94 points/month (95% confidence interval, 0.88-0.99; range, 0.80-1.04).

Conclusions: To equalize the occurrence of an adverse outcome, the proper listing MELD for patients with HCC is 16, with approximately 1 additional point/month. These results provide a data-driven algorithm to increase fairness in listing priority.
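Illustrative note (not from the article's text beyond its stated numbers): the proposed schedule, list at MELD 16 and add roughly one point per month, can be written as a toy function. The cap at 40 is our assumption, since MELD is bounded at 40 in practice.

```python
# Hedged sketch of the proposed HCC exception-point schedule
# (start at MELD 16, ~1 additional point per month; cap of 40 assumed).
def hcc_exception_meld(months_listed: int, start: int = 16, cap: int = 40) -> int:
    return min(start + months_listed, cap)

for m in (0, 3, 6, 12):
    print(f"{m:>2} months listed -> exception MELD {hcc_exception_meld(m)}")
```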
DOI: http://dx.doi.org/10.1097/TP.0000000000001812
September 2017

Voice-Related Patient-Reported Outcome Measures: A Systematic Review of Instrument Development and Validation.

J Speech Lang Hear Res 2017 01;60(1):62-88

Vanderbilt Evidence-Based Practice Center, Nashville, TN; Department of Health Policy, Vanderbilt University Medical Center, Nashville, TN.

Purpose: The purpose of this study was to perform a comprehensive systematic review of the literature on voice-related patient-reported outcome (PRO) measures in adults and to evaluate each instrument for the presence of important measurement properties.

Method: MEDLINE, the Cumulative Index of Nursing and Allied Health Literature, and the Health and Psychosocial Instrument databases were searched using relevant vocabulary terms and key terms related to PRO measures and voice. Inclusion and exclusion criteria were developed in consultation with an expert panel. Three independent investigators assessed study methodology using criteria developed a priori. Measurement properties were examined and entered into evidence tables.

Results: A total of 3,744 studies assessing voice-related constructs were identified. This list was narrowed to 32 PRO measures on the basis of predetermined inclusion and exclusion criteria. Questionnaire measurement properties varied widely. Important thematic deficiencies were apparent: (a) lack of patient involvement in the item development process, (b) lack of robust construct validity, and (c) lack of clear interpretability and scaling.

Conclusions: PRO measures are a principal means of evaluating treatment effectiveness in voice-related conditions. Despite their prominence, available PRO measures have disparate methodological rigor. Care must be taken to understand the psychometric and measurement properties and the applicability of PRO measures before advocating for their use in clinical or research applications.
DOI: http://dx.doi.org/10.1044/2016_JSLHR-S-16-0022
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5533561
January 2017

A comparison of patency and interventions in thigh versus Hemodialysis Reliable Outflow grafts for chronic hemodialysis vascular access.

J Vasc Surg 2016 Nov 18;64(5):1392-1399. Epub 2016 Jul 18.

Department of Surgery, Vanderbilt University Medical Center, Nashville, Tenn.

Objective: With improvements in medical management and survival of patients with end-stage renal disease, maintaining durable vascular access is increasingly challenging. This study compared primary, assisted primary, and secondary patency and procedure-specific complications, and evaluated whether the number of interventions required to maintain or restore patency differed between prosthetic femoral-femoral looped inguinal access (thigh) grafts and Hemodialysis Reliable Outflow (HeRO; Hemosphere Inc, Minneapolis, Minn) grafts.

Methods: A single-center, retrospective, intention-to-treat analysis was conducted of consecutive thigh and HeRO grafts placed between May 2004 and June 2015. Medical history, interventions to maintain or restore patency, and complications were abstracted from the electronic medical record. Data were analyzed using parametric and nonparametric statistical tests, Kaplan-Meier survival methods, and multivariable proportional hazards regression and logistic regression.

Results: Seventy-six (43 thigh, 33 HeRO) grafts were placed in 61 patients (54% male; age 53 [standard deviation, 13] years). Median follow-up time in the intention-to-treat analysis was 21.2 months (min, 0.0; max, 85.3 months) for thigh grafts and 6.7 months (min, 0.0; max, 56.3 months) for HeRO grafts (P = .02). The groups were comparable for sex, age, coronary artery disease, diabetes mellitus, peripheral vascular disease, and smoking history (all P ≥ .12). One thigh graft (2%) and five HeRO (15%) grafts failed primarily. In the intention-to-treat analysis, patency durations were significantly longer in the thigh grafts (all log-rank P ≤ .01). Point estimates of primary patency at 6 months, 1 year, and 3 years were 61%, 46%, and 4% for the thigh grafts and 25%, 15%, and 6% for the HeRO grafts. Point estimates of assisted primary patency at 6 months, 1 year, and 3 years were 75%, 66%, and 54% for the thigh grafts and 41%, 30%, and 10% for the HeRO grafts. Point estimates of secondary patency at 6 months, 1 year, and 3 years were 88%, 88%, and 70% for the thigh grafts and 53%, 43%, and 12% for the HeRO grafts. There were no differences in ischemic (P = .63) or infectious (P = .79) complications between the groups. Multivariable logistic regression demonstrated that after adjusting for follow-up time, HeRO grafts were associated with an increased number of interventions (P = .03).

Conclusions: Thigh grafts have significantly better primary, assisted primary, and secondary patency compared with HeRO grafts. There is no significant difference between thigh grafts and HeRO grafts in ischemic or infectious complications. Our logistic regression model demonstrated an association between HeRO grafts and an increased number of interventions to maintain or restore patency. Although HeRO grafts may extend the use of the upper extremity, thigh grafts provide a more durable option for chronic hemodialysis.
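Illustrative note (not from the article): Kaplan-Meier patency curves and a log-rank comparison of the two graft types can be sketched with lifelines; follow-up times and events below are invented.

```python
# Hedged sketch: Kaplan-Meier patency by graft type with a log-rank test.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(5)
thigh_t, thigh_e = rng.exponential(24, 43), rng.integers(0, 2, 43)
hero_t,  hero_e  = rng.exponential(8, 33),  rng.integers(0, 2, 33)

km = KaplanMeierFitter()
km.fit(thigh_t, thigh_e, label="thigh")
print(km.predict([6, 12, 36]))  # point estimates of patency at 6/12/36 months

res = logrank_test(thigh_t, hero_t,
                   event_observed_A=thigh_e, event_observed_B=hero_e)
print(f"log-rank p = {res.p_value:.4f}")
```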
DOI: http://dx.doi.org/10.1016/j.jvs.2016.04.055
November 2016

Patient-Reported Outcome Measures Related to Laryngopharyngeal Reflux: A Systematic Review of Instrument Development and Validation.

Otolaryngol Head Neck Surg 2016 12 23;155(6):923-935. Epub 2016 Aug 23.

Division of Gastroenterology and Hepatology, Vanderbilt University Medical Center, Nashville, Tennessee, USA.

Objectives: Patient-reported outcome (PRO) measures are often used to diagnose laryngopharyngeal reflux (LPR) and monitor treatment outcomes in clinical and research settings. The present systematic review was designed to identify currently available LPR-related PRO measures and to evaluate each measure's instrument development, validation, and applicability.

Data Sources: MEDLINE via PubMed interface, CINAHL, and Health and Psychosocial Instrument databases were searched with relevant vocabulary and key terms related to PRO measures and LPR.

Review Methods: Three investigators independently performed abstract review and full text review, applying a previously developed checklist to critically assess measurement properties of each study meeting inclusion criteria.

Results: Of 4947 studies reviewed, 7 LPR-related PRO measures (publication years, 1991-2010) met criteria for extraction and analysis. Two focused on globus and throat symptoms; the remaining measures were designed to assess LPR symptoms and monitor treatment outcomes. None met all checklist criteria. Only 2 of 7 used patient input to devise item content, and 2 of 7 assessed responsiveness to change. Thematic deficiencies in current LPR-related measures include inadequately demonstrated content validity and construct validity, and absent plans for interpretation and literacy-level assessment.

Conclusion: Laryngopharyngeal reflux is often diagnosed according to symptoms. Currently available LPR-related PRO measures used to symptomatically identify suspected LPR patients have disparate developmental rigor and important methodological deficiencies. Care should be exercised to understand the measurement characteristics and contextual relevance before applying these PRO measures for clinical, research, or quality initiatives.
DOI: http://dx.doi.org/10.1177/0194599816664330
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5639324
December 2016

Early and Sustained Reduction in Donor-Specific Antibodies in Desensitized Living Donor Kidney Transplant Recipients: A 3-Year Prospective Study.

Transplant Direct 2016 Feb 11;2(2):e62. Epub 2016 Jan 11.

Department of Surgery, Vanderbilt University Medical Center, Nashville, TN.

Background: Desensitization with IVIG and rituximab allows acceptable graft survival in sensitized kidney transplant recipients with preexisting donor-specific antibodies (DSAs) and a positive crossmatch. There is little published data reporting the durability of DSA removal in kidney transplant recipients treated with IVIG and rituximab.

Methods: We conducted a 3-year prospective DSA monitoring study in living donor kidney recipients with preexisting DSA to assess the durability of DSA removal after a perioperative protocol of IVIG and rituximab. All recipients had flow crossmatch titers less than 1:32. Data were analyzed using linear mixed effects models and Kaplan-Meier survival methods.

Results: The longitudinal database comprised 210 mean fluorescence intensity (MFI) determinations. Forty-two DSAs were identified in 29 patients. Pretreatment MFI averaged 4715 ± 3962 (range, 947-20 129). At 1 month posttransplant, 18 patients (62%) had a complete response (MFI < 1000) and an additional 9 patients (31%) had a partial response (MFI reduced but >1000). There was a 46% reduction (P < 0.001) in DSA MFI at 1 month posttransplant that was sustained throughout the 3-year follow-up period and was observed for both class I and II DSAs regardless of pretreatment MFI levels. With a mean posttransplant follow-up of 1048 ± 574 days, 3-year patient and graft survivals were 95% and 90%. Four patients (14%) had acute rejection between days 125 and 560.

Conclusions: Desensitization with IVIG and rituximab results in early and sustained DSA removal over a 3-year posttransplant period in living donor kidney transplant recipients with pretransplant DSA and a positive crossmatch, excellent patient and graft survivals and a low incidence of acute rejection.
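Illustrative note (not from the article): the response definitions stated in the Results, complete if posttransplant MFI < 1000 and partial if reduced but still ≥ 1000, reduce to a small classifier.

```python
# Hedged sketch of the DSA response classification used in the Results.
def dsa_response(pre_mfi: float, post_mfi: float) -> str:
    if post_mfi < 1000:
        return "complete"   # MFI < 1000
    if post_mfi < pre_mfi:
        return "partial"    # reduced but still >= 1000
    return "no response"

print(dsa_response(4715, 800), dsa_response(4715, 2500), dsa_response(947, 1200))
```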
DOI: http://dx.doi.org/10.1097/TXD.0000000000000570
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4946491
February 2016

Checklist to operationalize measurement characteristics of patient-reported outcome measures.

Syst Rev 2016 08 2;5(1):129. Epub 2016 Aug 2.

Center for Surgical Quality and Outcomes Research, Institute for Medicine and Public Health, Vanderbilt University Medical Center, Nashville, 37232, TN, USA.

Background: The purpose of this study was to advance a checklist of evaluative criteria designed to assess patient-reported outcome (PRO) measures' developmental measurement properties and applicability, which can be used by systematic reviewers, researchers, and clinicians with a varied range of expertise in psychometric measure development methodology.

Methods: A directed literature search was performed to identify original studies, textbooks, consensus guidelines, and published reports that propose criteria for assessing the quality of PRO measures. Recommendations from these sources were iteratively distilled into a checklist of key attributes. Preliminary items underwent evaluation through 24 cognitive interviews with clinicians and quantitative researchers. Six measurement theory methodological novices independently applied the final checklist to assess six PRO measures encompassing a variety of methods, applications, and clinical constructs. Agreement between novice and expert scores was assessed.

Results: The distillation process yielded an 18-item checklist with six domains: (1) conceptual model, (2) content validity, (3) reliability, (4) construct validity, (5) scoring and interpretation, and (6) respondent burden and presentation. With minimal instruction, good agreement in checklist item ratings was achieved between quantitative researchers with expertise in measurement theory and less experienced clinicians (mean kappa 0.70; range 0.66-0.87).

Conclusions: We present a simplified checklist that can help guide systematic reviewers, researchers, and clinicians with varied measurement theory expertise to evaluate the strengths and weakness of candidate PRO measures' developmental properties and the appropriateness for specific applications.
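Illustrative note (not from the article): novice-expert agreement on binary checklist ratings of the kind reported can be quantified with Cohen's kappa; the ratings below are invented.

```python
# Hedged sketch: Cohen's kappa for expert-vs-novice checklist agreement.
from sklearn.metrics import cohen_kappa_score

expert = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # 1 = criterion judged met
novice = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
print(f"kappa = {cohen_kappa_score(expert, novice):.2f}")
```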
DOI: http://dx.doi.org/10.1186/s13643-016-0307-4
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4971647
August 2016

Patient-reported outcomes in liver transplant recipients with hepatocellular carcinoma.

Clin Transplant 2016 09 18;30(9):1036-45. Epub 2016 Jul 18.

Department of Surgery, Vanderbilt University Medical Center, Nashville, TN, USA.

Background: The effect of awarding MELD exception points for hepatocellular carcinoma (HCC) on patient-reported outcomes (PROs) is unknown. We evaluated the physical and mental health-related quality of life (HRQOL) and symptoms of anxiety and depression in liver transplant recipients with HCC compared to patients without HCC.

Methods: This single-center study measured PROs before and after transplant; the sample comprised 1521 multisurvey measurement points among 502 adults (67% male, 28% HCC, follow-up time: <1-131 months). Data were analyzed using multivariable mixed-effects models.

Results: Longitudinal PRO values did not differ between persons who received HCC exception points and those who did not have HCC. Patients with HCC who did not receive exception points had reduced physical HRQOL (P=.016), a late decline in mental HRQOL, and delayed reduction in anxiety (time-by-outcome interaction P<.050) compared to patients with HCC who received exception points.

Conclusion: Transplant recipients who received HCC exception points had PROs that were comparable to those of patients without HCC, and reported better physical HRQOL and reduced symptoms of anxiety compared to patients with HCC who did not receive exception points. These analyses demonstrate the impact of HCC exception points on PROs, and may help inform policy regarding HCC exception point allocation.
DOI: http://dx.doi.org/10.1111/ctr.12785
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5273862
September 2016

A2 incompatible kidney transplantation does not adversely affect graft or patient survival.

Clin Transplant 2016 05 17;30(5):589-97. Epub 2016 Mar 17.

Department of Surgery, Division of Kidney and Pancreas Transplantation, Vanderbilt University Medical Center, Nashville, TN, USA.

Background: The new United Network for Organ Sharing (UNOS) kidney allocation system (KAS) incorporates A2 and A2B to B transplantation to reduce wait times for blood group B candidates. Few studies have employed multicenter data or comprehensively defined donor-to-recipient ABO classification systems.

Methods: We retrospectively analyzed UNOS data from 1987-2013 to evaluate the effect of A2 incompatible (A2i) kidney transplantation on graft and patient survival. Records of 314 056 adults (340 150 transplants) were classified as A2i (560 transplants: A2 to B, A2 to O, or A2B to B) or compatible. Analyses included Kaplan-Meier survival methods and multivariable Cox proportional hazards regression.

Results: Graft survival after A2i transplant (median = 116 months) did not differ (log-rank p ≥ 0.101) from any compatible class (medians = 106-119 months); there was no effect of A2i on patient survival (log-rank p ≥ 0.286). After adjusting for age, race, donor type, pancreas, or previous kidney transplant, A2i was not associated with graft (p ≥ 0.263) or patient (p ≥ 0.060) survival in this largest cohort to date.

Conclusions: A2i kidney transplantation does not adversely affect graft or patient survival. A2i kidney transplantation has been included in the new KAS and represents a viable option for transplant centers to increase transplant volume and reduce wait times for disadvantaged B waitlist recipients.
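Illustrative note (not from the article): the donor-to-recipient classification described in the Methods can be expressed as a lookup; the set below reflects only the A2i combinations named in the abstract.

```python
# Hedged sketch of the A2-incompatible (A2i) classification from the Methods.
A2I_COMBOS = {("A2", "B"), ("A2", "O"), ("A2B", "B")}

def classify(donor: str, recipient: str) -> str:
    return "A2i" if (donor, recipient) in A2I_COMBOS else "compatible/other"

print(classify("A2", "B"), classify("A2B", "B"), classify("O", "O"))
```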
DOI: http://dx.doi.org/10.1111/ctr.12724
May 2016

Patient-reported outcomes after incisional hernia repair: establishing the ventral hernia recurrence inventory.

Am J Surg 2016 Jul 31;212(1):81-8. Epub 2015 Jul 31.

Department of Surgery, Vanderbilt University Medical Center, Nashville, 1161 Medical Center Drive, D-5203 Medical Center North, TN 37232, USA.

Background: Assessing incisional hernia recurrence typically requires a clinical encounter. We sought to determine if patient-reported outcomes (PROs) could detect long-term recurrence.

Methods: Adult patients 1 to 5 years after incisional hernia repair were prospectively asked about recurrence, bulge, and pain at the original repair site. Using dynamic abdominal sonography for hernia to detect recurrence, performance of each PRO was determined. Multivariable regression was used to evaluate PRO association with recurrence.

Results: Fifty-two patients enrolled, with follow-up time 46 ± 13 months. A patient-reported bulge was 85% sensitive and 81% specific for detecting recurrence. Patients reporting no bulge and no pain had a 0% chance of recurrence. In multivariable analysis, patients reporting a bulge were 18 times more likely to have a recurrence than those without (95% confidence interval, 3.7 to 90.0; P < .001).

Conclusions: This preliminary study demonstrates that PROs offer a promising means of detecting long-term recurrence after incisional hernia repair, which can help facilitate quality improvement and research efforts.
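Illustrative note (not from the article): sensitivity and specificity of the patient-reported bulge against the sonographic reference follow directly from a 2x2 table; the counts below are invented and merely chosen to land near the reported 85%/81%.

```python
# Hedged sketch: sensitivity/specificity from a 2x2 confusion table.
def sens_spec(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(tp=11, fp=6, fn=2, tn=26)   # invented counts
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```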
DOI: http://dx.doi.org/10.1016/j.amjsurg.2015.06.007
July 2016

Validity and reliability of the Patient-Reported Arthralgia Inventory: validation of a newly-developed survey instrument to measure arthralgia.

Patient Relat Outcome Meas 2015 28;6:205-14. Epub 2015 Jul 28.

Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.

Background: There is a need for a survey instrument to measure arthralgia (joint pain) that has been psychometrically validated in the context of existing reference instruments. We developed the 16-item Patient-Reported Arthralgia Inventory (PRAI) to measure arthralgia severity in 16 joints, in the context of a longitudinal cohort study to assess aromatase inhibitor-associated arthralgia in breast cancer survivors and arthralgia in postmenopausal women without breast cancer. We sought to evaluate the reliability and validity of the PRAI instrument in these populations, as well as to examine the relationship of patient-reported morning stiffness and arthralgia.

Methods: We administered the PRAI on paper to 294 women (94 initiating aromatase inhibitor therapy and 200 postmenopausal women without breast cancer) at weeks 0, 2, 4, 6, 8, 12, 16, and 52, as well as once to 36 women who had taken but were no longer taking aromatase inhibitor therapy.

Results: Cronbach's alpha was 0.9 for internal consistency of the PRAI. Intraclass correlation coefficients of test-retest reliability were in the range of 0.87-0.96 over repeated PRAI administrations; arthralgia severity was higher in the non-cancer group at baseline than at subsequent assessments. Women with joint comorbidities tended to have higher PRAI scores than those without (estimated difference in mean scores: -0.3, 95% confidence interval [CI] -0.5, -0.2; P<0.001). The PRAI was highly correlated with the Functional Assessment of Cancer Therapy-Endocrine Subscale item "I have pain in my joints" (reference instrument; Spearman r range: 0.76-0.82). Greater arthralgia severity on the PRAI was also related to decreased physical function (r=-0.47, 95% CI -0.55, -0.37; P<0.001), higher pain interference (r=0.65, 95% CI 0.57-0.72; P<0.001), less active performance status (estimated difference in location: -0.6, 95% CI -0.9, -0.4; P<0.001), and increased morning stiffness duration (r=0.62, 95% CI 0.54-0.69; P<0.0001).

Conclusion: We conclude that the psychometric properties of the PRAI are satisfactory for measuring arthralgia severity.
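Illustrative note (not from the article): Cronbach's alpha for a 16-item scale can be computed straight from the classical formula; respondents and item scores below are simulated.

```python
# Hedged sketch: Cronbach's alpha for a 16-item inventory, from the formula
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

rng = np.random.default_rng(6)
latent = rng.normal(size=(50, 1))                      # common "pain" factor
items = latent + rng.normal(scale=0.5, size=(50, 16))  # 50 respondents x 16 joints

k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha = {alpha:.2f}")
```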
DOI: http://dx.doi.org/10.2147/PROM.S47997
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4524452
August 2015

Surgeons, ERCP, and laparoscopic common bile duct exploration: do we need a standard approach for common bile duct stones?

Surg Endosc 2016 Feb 20;30(2):414-423. Epub 2015 Jun 20.

Department of Surgery, Vanderbilt University Medical Center, D-5203 MCN, VUMC, 1161 Medical Center Drive, Nashville, TN, 37232, USA.

Background: Variation exists in the management of choledocholithiasis (CDL). This study evaluated associations between demographic and practice-related characteristics and CDL management.

Methods: A 22-item, web-based survey was administered to US general surgeons. Respondents were classified into metropolitan or nonmetropolitan groups by zip code. Univariate tests and multivariable logistic regression were used to determine factors associated with CDL management preferences.

Results: The survey was sent to 32,932 surgeons; 9902 performed laparoscopic cholecystectomy within the last year; 750 of 771 respondents had a valid US zip code and were included in the analysis. Mean practice time was 18 ± 10 years, 87% were male, and 83% practiced in a metropolitan area. For preoperatively known CDL, 86% chose preoperative endoscopic retrograde cholangiopancreatography (ERCP). Those in metropolitan areas were more likely to select preoperative ERCP than those in nonmetropolitan areas (88 vs. 79%, p < 0.001). For CDL discovered intraoperatively, 30% selected laparoscopic common bile duct exploration (LCBDE) as their preferred method of management with no difference between metropolitan and nonmetropolitan areas (30 vs. 26%, p = 0.335). The top reasons for not performing LCBDE were: having a reliable ERCP proceduralist available, lack of equipment, and lack of comfort performing LCBDE. Factors associated with preoperative ERCP were: metropolitan status, selective intraoperative cholangiography (IOC), and availability of a reliable ERCP proceduralist. Those who perform selective IOC were 70% less likely to prefer LCBDE (OR 0.32, 95% CI 0.18-0.57, p < 0.001). Those with a reliable ERCP proceduralist available were 90% less likely to prefer LCBDE (OR 0.10, 95% CI 0.04-0.26, p < 0.001).

Conclusions: The majority of respondents preferred ERCP for the management of CDL. Having a reliable ERCP proceduralist available, use of selective IOC, and metropolitan status were independently associated with preoperative ERCP. Postoperative ERCP was preferred for managing intraoperatively discovered CDL. Many surgeons are uncomfortable performing LCBDE, and increased training may be needed.
DOI: http://dx.doi.org/10.1007/s00464-015-4273-z
February 2016

Validation of the Written Administration of the Short Literacy Survey.

J Health Commun 2015 14;20(7):835-42. Epub 2015 Jun 14.

a Vanderbilt Transplant Center , Vanderbilt University Medical Center , Nashville , Tennessee , USA.

Most health literacy assessments are time consuming and administered verbally. Written self-administration of measures may facilitate more widespread assessment of health literacy. This study aimed to determine the intermethod reliability and concurrent validity of the written administration of the 3 subjective health literacy questions of the Short Literacy Survey (SLS). The Rapid Estimate of Adult Literacy in Medicine (REALM) and the shortened test of Functional Health Literacy in Adults (S-TOFHLA) were the reference measures of health literacy. Two hundred ninety-nine participants completed the written and verbal administrations of the SLS from June to December 2012. Intermethod reliability was demonstrated in that (a) the written and verbal SLS scores did not differ and (b) the written and verbal scores were highly correlated. The written items were internally consistent (Cronbach's α = .733). The written total score successfully identified persons with sixth-grade equivalency or less for literacy on the REALM (AUROC = 0.753) and inadequate literacy on the S-TOFHLA (AUROC = 0.869). The written administration of the SLS is reliable and valid, and is effective in identifying persons with limited health literacy.
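Illustrative note (not from the article): screening performance against a reference standard, as with the SLS versus the S-TOFHLA, is summarized by the AUROC; labels and scores below are simulated.

```python
# Hedged sketch: AUROC of a brief screener against a reference standard.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
limited = rng.integers(0, 2, 200)              # reference: 1 = limited literacy
score = limited * 2 + rng.normal(0, 1.5, 200)  # hypothetical SLS total score

print(f"AUROC = {roc_auc_score(limited, score):.3f}")
```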
DOI: http://dx.doi.org/10.1080/10810730.2015.1018572
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4698961
December 2015

Understanding Patient Barriers to Kidney Transplant Evaluation.

Transplantation 2015 Jul;99(7):1463-9

1 Vanderbilt Transplant Center, Vanderbilt University Medical Center, Nashville, TN. 2 Department of Surgery, Vanderbilt University Medical Center, Nashville, TN. 3 University of Tennessee Health Science Center, Memphis, TN. 4 Division of Nephrology, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN. 5 Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN.

Background: Some patients referred for kidney transplant evaluation fail to attend the visit. Our goal was to compare demographic, socioeconomic, and psychologic factors between evaluation visit attendees and absentees.

Methods: A convenience sample of patients referred and scheduled for kidney transplant evaluation at a single center from November 2012 to December 2013 participated in a phone survey reporting socioeconomic, demographic, and clinical characteristics; health literacy; and perceived knowledge and concerns about transplantation. Absentees were matched by race with attendees. Analyses of differences between groups were performed with chi-square test, Fisher exact test, and t tests. Multivariable logistic regression was adjusted for relevant demographic characteristics.

Results: One hundred four adults participated (61% men, 46% white, 52 ± 12 years). Financial concerns were the most prevalent (67.3% affording medication, 64.1% affording operation). Previous evaluation at a different transplant center (P = 0.029) and being on dialysis (P = 0.008) were significantly associated with absence. Attendance was associated with concerns about finding a living donor (P = 0.038) and higher perceived general knowledge about transplantation (P ≤ 0.001). No differences were appreciated in demographic, socioeconomic, or health literacy factors between groups.

Conclusion: Both attendee and absentee patients were most concerned with the financial burden of kidney transplantation. Although concerns and perceived knowledge are important correlates of behavior, other considerations such as psychologic factors and prior medical experiences may influence patients' ability to complete the kidney transplant evaluation process. Although this pilot study was conducted in a small sample and has limited generalizability, our findings can guide future research.
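A minimal sketch of this style of analysis appears below: bivariate tests for attendee-absentee differences followed by a demographics-adjusted logistic regression for attendance. It is illustrative only; the file and column names are hypothetical, not from the study.

```python
# Illustrative group comparisons and adjusted logistic regression; all
# file and column names below are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact, ttest_ind
import statsmodels.formula.api as smf

df = pd.read_csv("referral_survey.csv")  # hypothetical data file

# Chi-square test for a categorical factor (e.g., dialysis status) by attendance;
# the Fisher exact test is the fallback when expected cell counts are small.
tab = pd.crosstab(df["on_dialysis"], df["attended"])
chi2, p_chi, dof, _ = chi2_contingency(tab)
odds, p_fisher = fisher_exact(tab)

# t test for a continuous factor (e.g., perceived transplant knowledge score)
t, p_t = ttest_ind(df.loc[df["attended"] == 1, "knowledge_score"],
                   df.loc[df["attended"] == 0, "knowledge_score"])

# Multivariable logistic regression for attendance, adjusted for demographics
fit = smf.logit("attended ~ on_dialysis + prior_evaluation + knowledge_score"
                " + age + C(race) + C(sex)", data=df).fit()
print(fit.summary())
```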
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1097/TP.0000000000000543DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4495014PMC
July 2015

Minority Underrepresentation in Academia: Factors Impacting Careers of Surgery Residents.

J Racial Ethn Health Disparities 2014 Dec;1(4):238-246

Department of Surgery, Vanderbilt University Medical Center, Nashville, TN; Department of Thoracic Surgery, Vanderbilt University Medical Center, Nashville, TN; Veterans Affairs Medical Center, Nashville, TN.

Background: Underrepresentation of minorities within academic surgery is an ever-present problem with a profound impact on healthcare. The factors influencing surgery residents to pursue an academic career have yet to be formally investigated. We sought to elucidate these factors, with a focus on minority status.

Methods: A web-based questionnaire was administered to all ACGME-accredited general surgery programs in the United States. The main outcome was the decision to pursue an academic versus a non-academic career. Multivariable logistic regression was used to identify characteristics impacting career choice.

Results: Of the 3,726 residents who received the survey, 1,217 completed it (a 33% response rate). Forty-seven percent planned to pursue non-academic careers, 35% academic careers, and 18% were undecided. There was no association between underrepresented minority status and academic career choice (OR = 1.0; 95% CI, 0.6-1.6). Among all residents, research during training (OR = 4.0; 95% CI, 2.7-5.9), mentorship (OR = 2.1; 95% CI, 1.6-2.9), and attending a residency program requiring research (OR = 2.3; 95% CI, 1.5-3.4) were factors associated with choosing an academic career. When the analysis was restricted to senior residents (i.e., fourth- and fifth-year residents), a debt burden >$150,000 was associated with choosing a non-academic career (OR = 0.4; 95% CI, 0.1-0.9).

Conclusions: Underrepresented minority status is not associated with career choice. Intentional recruitment of minorities into research-oriented training programs, increased mentorship and research support among current minority residents, and improved financial options for minorities may increase the number choosing an academic surgical career.
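The odds ratios above come from a multivariable logistic regression; below is a hedged sketch of how such a model might be fit and its odds ratios with 95% confidence intervals extracted. The file and column names are hypothetical.

```python
# Hypothetical sketch: multivariable logistic regression reporting
# odds ratios with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("resident_survey.csv")  # hypothetical data file
fit = smf.logit("academic_career ~ urm_status + research_during_training"
                " + has_mentor + program_requires_research + debt_over_150k",
                data=df).fit()

# Exponentiating the coefficients yields odds ratios; the same transform
# applied to the confidence bounds gives the 95% CIs.
ors = np.exp(fit.params).rename("OR")
cis = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([ors, cis], axis=1).round(2))
```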
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1007/s40615-014-0030-6DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4226069PMC
December 2014

Does renal dysfunction and method of bridging support influence heart transplant graft survival?

Ann Thorac Surg 2014 Sep 25;98(3):835-41. Epub 2014 Jul 25.

Division of Cardiovascular Surgery, Vanderbilt University Medical Center, Nashville, Tennessee.

Background: Renal insufficiency is common in status 1B patients supported with inotropes or a continuous flow left ventricular assist device (CF-LVAD) as a bridge to heart transplantation. We evaluated the associations of renal function and of inotrope versus CF-LVAD support with posttransplant graft survival in status 1B patients.

Methods: The Scientific Registry of Transplant Recipients database was analyzed for posttransplant survival in status 1B patients bridged with inotropes or a CF-LVAD who underwent transplantation between 2003 and 2012. Pretransplant renal function was measured by estimated glomerular filtration rate (GFR), stratified as less than 45 mL/min/1.73 m², 45 to 59, and 60 or greater. Univariate Kaplan-Meier and multivariate Cox regression models were used to evaluate the main effects of GFR strata and of inotrope versus CF-LVAD support, and the GFR strata-by-CF-LVAD interaction effect, on graft survival.

Results: This study included 4,158 status 1B patients (74% male, aged 53 ± 12 years). Of those, 659 patients had a CF-LVAD (HeartMate II [Thoratec, Pleasanton, CA], n = 638; HVAD [HeartWare, Framingham, MA], n = 21), and 3,530 were receiving inotropes (31 CF-LVAD patients were also receiving inotropes). Kaplan-Meier analyses demonstrated reduced graft survival (p = 0.022) in patients with pretransplant GFR less than 45 (versus GFR 45 to 59, p = 0.062; versus GFR 60 or greater, p = 0.007), and no effect of inotrope versus CF-LVAD support on graft survival (p = 0.402). Multivariate analysis demonstrated that, after adjusting for the main effects of GFR stratum, CF-LVAD, and inotropes, status 1B patients bridged with a CF-LVAD whose GFR fell in the lowest stratum had reduced graft survival (interaction effect p = 0.040).

Conclusions: Pretransplant renal insufficiency was associated with reduced posttransplant graft survival in status 1B patients. This risk is increased for patients bridged with a CF-LVAD (versus inotropes) who have GFR in the lowest stratum.
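To make the modeling concrete, here is a minimal sketch, under hypothetical column names, of a Cox model with a device-by-GFR interaction term like the one described above, using the lifelines package. The interaction column is built by hand so its construction is explicit.

```python
# Hypothetical sketch: Kaplan-Meier curves by GFR stratum and a Cox model
# with an explicit CF-LVAD-by-lowest-GFR interaction term (lifelines package).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("status1b_cohort.csv")  # hypothetical registry extract
# Assumed columns: time_months, graft_loss (1 = graft failure or death),
# gfr_stratum ("<45", "45-59", ">=60"), cf_lvad (1 = CF-LVAD, 0 = inotropes)

# Unadjusted survival curves by GFR stratum
kmf = KaplanMeierFitter()
for stratum, grp in df.groupby("gfr_stratum"):
    kmf.fit(grp["time_months"], grp["graft_loss"], label=str(stratum))

# Dummy-code the GFR strata (>=60 as reference) and build the interaction
df["gfr_lt45"] = (df["gfr_stratum"] == "<45").astype(int)
df["gfr_45_59"] = (df["gfr_stratum"] == "45-59").astype(int)
df["lvad_x_gfr_lt45"] = df["cf_lvad"] * df["gfr_lt45"]

cols = ["time_months", "graft_loss", "gfr_lt45", "gfr_45_59",
        "cf_lvad", "lvad_x_gfr_lt45"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time_months", event_col="graft_loss")
cph.print_summary()  # an HR > 1 on the interaction term mirrors the reported effect
```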
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.athoracsur.2014.05.059DOI Listing
September 2014

Increased minimum vein diameter on preoperative mapping with duplex ultrasound is associated with arteriovenous fistula maturation and secondary patency.

J Vasc Surg 2015 Jan 24;61(1):170-6. Epub 2014 Jul 24.

Vanderbilt Transplant Center, Vanderbilt University Medical Center, Nashville, Tenn; Department of Surgery, Vanderbilt University Medical Center, Nashville, Tenn.

Objective: Autogenous arteriovenous hemodialysis accesses (arteriovenous fistulas [AVFs]) are preferred for chronic hemodialysis access. Preoperative vein mapping by duplex ultrasound is recommended before AVF creation, but there are few data correlating vein diameter with postoperative outcomes. Also, vein diameter has not been included in prior predictive models of fistula maturation. This study aims to test whether preoperative vein diameter is associated with failure of AVF maturation and long-term (secondary) patency.

Methods: We performed a retrospective analysis of clinical variables of patients undergoing brachiobasilic or brachiocephalic AVF creation. Kaplan-Meier and multivariate Cox regression models tested whether preoperative minimum vein diameter (MVD) and clinical covariates were associated with failure of AVF maturation and secondary patency.

Results: The sample included 158 adults (54 ± 14 years; 45% male; 61% white; 56% diabetes; body mass index, 32 ± 8; MVD, 3.4 ± 1.1 mm; follow-up, 12 ± 9 months [range, <1-40 months]). Increased MVD was associated with decreased risk of AVF failure. More than one third of AVFs with MVD <2.7 mm failed to mature within 6 months. Multivariate models that adjusted for age, diabetes, race, gender, body mass index, and preoperative dialysis status demonstrated that increased MVD was associated with decreased risk of failure of maturation and better long-term patency overall (P = .005 and P = .001, respectively).

Conclusions: Patients with a larger MVD on preoperative vein mapping are at lower risk for failure of fistula maturation and have increased long-term AVF patency. MVD is the only clinical or demographic factor associated with both AVF maturation and long-term patency. MVD is an important preoperative indicator of fistula success in assessment of potential AVF sites. Future predictive models of fistula maturation and patency should include MVD.
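As a sketch of the adjusted survival modeling described above, the snippet below fits a Cox model with minimum vein diameter as a continuous predictor alongside the stated covariates. Column and file names are hypothetical, not taken from the study.

```python
# Hypothetical sketch: adjusted Cox model for failure of AVF maturation,
# with minimum vein diameter (MVD, mm) as a continuous predictor.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("avf_cohort.csv")  # hypothetical data extract
# Assumed columns: months_followup, failed_maturation (1 = failure), mvd_mm,
# age, diabetes, male, bmi, on_dialysis_preop, race (categorical)
covars = pd.get_dummies(
    df[["mvd_mm", "age", "diabetes", "male", "bmi", "on_dialysis_preop", "race"]],
    columns=["race"], drop_first=True).astype(float)
data = pd.concat([df[["months_followup", "failed_maturation"]], covars], axis=1)

cph = CoxPHFitter()
cph.fit(data, duration_col="months_followup", event_col="failed_maturation")
cph.print_summary()  # an HR < 1 for mvd_mm means lower failure risk per additional mm
```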
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.jvs.2014.06.092DOI Listing
January 2015

Institutional volume of heart transplantation with left ventricular assist device explantation influences graft survival.

J Heart Lung Transplant 2014 Sep 15;33(9):931-6. Epub 2014 May 15.

Division of Cardiovascular Surgery, Vanderbilt University Medical Center, Nashville, Tennessee.

Background: There are increasing numbers of patients undergoing orthotopic heart transplantation (OHT) with left ventricular assist device (LVAD) explantation (LVAD explant-OHT). We hypothesized that LVAD explant-OHT is a more challenging surgical procedure than OHT without LVAD explantation and that institutional LVAD explant-OHT procedural volume would therefore be associated with post-transplant graft survival; we sought to assess this volume-survival relationship.

Methods: This is a retrospective analysis of the Scientific Registry of Transplant Recipients for adult OHTs with long-term LVAD explantation. LVAD explant-OHT volume was characterized on the basis of the center's year-specific total OHT volume (OHTvol) and year-specific LVAD explant-OHT volume quartile (LVADvolQ). The effect of LVADvolQ on graft survival (death or re-transplantation) was analyzed.

Results: From 2004 to 2011, 2,681 patients underwent OHT with LVAD explantation (740 with HeartMate XVE, 1,877 with HeartMate II, and 64 with HeartWare devices). LVAD explant-OHT at centers falling in the lowest LVADvolQ was associated with reduced post-transplant graft survival (p = 0.022). After adjusting for annualized OHTvol (HR = 0.998, 95% CI 0.993 to 1.003, p = 0.515) and pulsatile XVE (HR = 0.842, 95% CI 0.688 to 1.032, p = 0.098), multivariate analysis confirmed a significantly (approximately 37%) increased risk of post-transplant graft failure among explant-OHT procedures occurring at centers in the lowest volume quartile (HR = 1.371, 95% CI 1.030 to 1.825, p = 0.030).

Conclusion: Graft survival is reduced when LVAD explant-OHT is performed at centers falling in the lowest year-specific volume quartile. This volume-survival relationship should be considered in the context of limited donor organ availability and the rapidly growing number of LVAD centers.
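A rough sketch of how year-specific center volume quartiles could be derived and tested follows; it assumes hypothetical column names and a recipient-level extract, and it is not the study's code.

```python
# Hypothetical sketch: year-specific LVAD explant-OHT volume quartiles per
# center, then a Cox model testing the lowest quartile against graft survival.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("explant_oht.csv")  # hypothetical recipient-level extract
# Assumed columns: center_id, tx_year, time_months, graft_failure,
# annual_oht_volume, pulsatile_xve

# Count each center's explant-OHT cases per year, then assign quartiles within year
vol = (df.groupby(["tx_year", "center_id"]).size()
         .rename("explant_volume").reset_index())
vol["vol_quartile"] = vol.groupby("tx_year")["explant_volume"].transform(
    lambda v: pd.qcut(v, 4, labels=False, duplicates="drop"))  # 0 = lowest
df = df.merge(vol, on=["tx_year", "center_id"])
df["lowest_quartile"] = (df["vol_quartile"] == 0).astype(int)

cols = ["time_months", "graft_failure", "lowest_quartile",
        "annual_oht_volume", "pulsatile_xve"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time_months", event_col="graft_failure")
cph.print_summary()  # HR > 1 for lowest_quartile indicates elevated failure risk
```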
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.healun.2014.04.016DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4130762PMC
September 2014

Early weight regain after gastric bypass does not affect insulin sensitivity but is associated with elevated ghrelin.

Obesity (Silver Spring) 2014 Jul 29;22(7):1617-22. Epub 2014 Apr 29.

Department of Surgery, Vanderbilt University School of Medicine, Nashville, Tennessee.

Objectives: We sought to determine (1) whether early weight regain between 1 and 2 years after Roux-en-Y gastric bypass (RYGB) is associated with worsened hepatic and peripheral insulin sensitivity, and (2) whether preoperative levels of ghrelin and leptin are associated with early weight regain after RYGB.

Methods: Hepatic and peripheral insulin sensitivity and ghrelin and leptin plasma levels were assessed longitudinally in 45 subjects before RYGB and at 1 month, 6 months, 1 year, and 2 years postoperatively. Weight regain was defined as ≥5% increase in body weight between 1 and 2 years after RYGB.

Results: Weight regain occurred in 33% of subjects, with an average increase in body weight of 10 ± 5% (8.5 ± 3.3 kg). Weight regain was not associated with worsening of peripheral or hepatic insulin sensitivity. Subjects with weight regain after RYGB had higher preoperative and postoperative levels of ghrelin compared to those who maintained or lost weight during this time. Conversely, the trajectories of leptin levels corresponded with the trajectories of fat mass in both groups.

Conclusions: Early weight regain after RYGB is not associated with a reversal of improvements in insulin sensitivity. Higher preoperative ghrelin levels might identify patients who are more susceptible to weight regain after RYGB.
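The study's weight-regain definition is simple to operationalize; the sketch below classifies subjects by the ≥5% criterion between the 1- and 2-year visits, assuming hypothetical file and column names.

```python
# Hypothetical sketch: flag weight regain, defined as a >=5% increase in body
# weight between the 1-year and 2-year post-RYGB visits.
import pandas as pd

df = pd.read_csv("rygb_followup.csv")  # hypothetical data, one row per subject
# Assumed columns: weight_1yr_kg, weight_2yr_kg
pct_change = (df["weight_2yr_kg"] - df["weight_1yr_kg"]) / df["weight_1yr_kg"] * 100
df["weight_regain"] = (pct_change >= 5).astype(int)
print(f"regain prevalence: {df['weight_regain'].mean():.0%}")  # ~33% in the study
```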
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/oby.20776DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4077938PMC
July 2014