Publications by authors named "Michael W Kattan"

538 Publications

Development and validation of pretreatment nomogram for disease-specific mortality in gastric cancer-A competing risk analysis.

Cancer Med 2021 Oct 10. Epub 2021 Oct 10.

Division of Gastric Surgery, Shizuoka Cancer Center, Shizuoka, Japan.

Background: In several reports, gastric cancer nomograms for predicting overall or disease-specific survival have been described. The American Joint Committee on Cancer (AJCC) has highlighted disease-specific mortality (DSM) as an attractive endpoint for risk models. This study aimed to develop the first pretreatment gastric cancer nomogram for predicting DSM that considers competing risks (CRs).

Methods: The prediction model was developed using data for 5231 gastric cancer patients. Fifteen prognosticators, all registered at diagnosis, were evaluated. The nomogram for DSM was created as a visualization of the multivariable Fine and Gray regression model. An independent cohort for external validation consisted of 389 gastric cancer patients from a different institution. The performance of the model was assessed by discrimination (Harrell's concordance (C)-index), calibration, and decision curve analysis. DSM and CRs were evaluated with Gray's univariable method, paying special attention to host-related factors such as age and Eastern Cooperative Oncology Group performance status (ECOG PS).
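
As an illustration of the competing-risk endpoint used above, the sketch below computes a nonparametric (Aalen-Johansen-style) cumulative incidence curve for disease-specific mortality in the presence of a competing cause. It is a minimal Python sketch on toy data, not the authors' Fine and Gray model or code.

```python
import numpy as np

def cumulative_incidence(time, event, cause=1):
    """Aalen-Johansen cumulative incidence for one cause.

    event coding: 0 = censored, 1 = cause of interest (e.g., death from
    gastric cancer), 2 = competing cause (e.g., death from other causes).
    """
    time, event = np.asarray(time, float), np.asarray(event, int)
    event_times = np.unique(time[event > 0])
    surv, cif, curve = 1.0, 0.0, []
    for t in event_times:
        at_risk = np.sum(time >= t)              # subjects still under observation
        d_all = np.sum((time == t) & (event > 0))
        d_cause = np.sum((time == t) & (event == cause))
        cif += surv * d_cause / at_risk          # probability mass added to this cause
        surv *= 1.0 - d_all / at_risk            # all-cause Kaplan-Meier survival
        curve.append((t, cif))
    return curve

# toy example: times in years, 1 = disease-specific death, 2 = competing death
t = [1.2, 2.0, 2.5, 3.1, 4.0, 5.5, 6.0]
e = [1,   0,   2,   1,   0,   2,   1]
print(cumulative_incidence(t, e, cause=1))
```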

Results: Fourteen prognostic factors were selected to develop the nomogram. The new nomogram for DSM exhibited good discrimination: its C-index of 0.887 surpassed that of AJCC clinical staging (0.794). In the external validation cohort, the C-index was 0.713 (AJCC, 0.582). The nomogram showed good performance both internally and externally in the calibration and decision curve analyses. Host-related factors, including age and ECOG PS, were strongly correlated with competing risks.

Conclusions: The newly developed nomogram accurately predicts DSM, which can be used for patient counseling in clinical practice.
Source: http://dx.doi.org/10.1002/cam4.4279
October 2021

Variation in preoperative stress testing by patient, physician and surgical type: a cohort study.

BMJ Open 2021 09 27;11(9):e048052. Epub 2021 Sep 27.

Center for Value-based Care Research, Cleveland Clinic, Cleveland, Ohio, USA.

Objectives: To describe variation in and drivers of contemporary preoperative cardiac stress testing.

Setting: A dedicated preoperative risk assessment and optimisation clinic at a large integrated medical centre from 2008 through 2018.

Participants: A cohort of 118 552 adult patients seen by 104 physicians across 159 795 visits to a preoperative risk assessment and optimisation clinic.

Main Outcome: Referral for stress testing before major surgery, including nuclear, echocardiographic or electrocardiographic-only stress testing, within 30 days after a clinic visit.

Results: A total of 8303 visits (5.2%) resulted in referral for preoperative stress testing. Key patient factors associated with preoperative stress testing included predicted surgical risk, patient functional status, a previous diagnosis of ischaemic heart disease, tobacco use and body mass index. Patients living in either the most-deprived or least-deprived census block groups were more likely to be tested. Patients were tested more frequently before aortic, peripheral vascular or urologic interventions than before other surgical subcategories. Even after fully adjusting for patient and surgical factors, provider effects remained important: marginal testing rates differed by a factor-of-three in relative terms and around 2.5% in absolute terms between the 5th and 95th percentile physicians. Stress testing frequency decreased over the time period; controlling for patient and physician predictors, a visit in 2008 would have resulted in stress testing approximately 3.5% of the time, while a visit in 2018 would have resulted in stress testing approximately 1.3% of the time.

Conclusions: In this large cohort of patients seen for preoperative risk assessment at a single health system, decisions to refer patients for preoperative stress testing are influenced by various factors other than estimated perioperative risk and functional status, the key considerations in current guidelines. The frequency of preoperative stress testing has decreased over time, but remains highly provider-dependent.
Source: http://dx.doi.org/10.1136/bmjopen-2020-048052
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8477322
September 2021

Somatic mutations as preoperative predictors of metastases in patients with localized clear cell renal cell carcinoma - An exploratory analysis.

Urol Oncol 2021 Nov 25;39(11):791.e17-791.e24. Epub 2021 Sep 25.

Urology Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY.

Objective: Recurrent genomic alterations in clear cell renal cell carcinoma (ccRCC) have been associated with treatment outcomes; however, current preoperative predictive models do not include known genetic predictors. We aimed to explore the value of common somatic mutations in the preoperative prediction of metastatic disease among patients treated for localized ccRCC.

Materials And Methods: After obtaining institutional review board approval, data were collected for 254 patients with localized ccRCC treated between 2005 and 2015 who underwent genetic sequencing. The mutation status of VHL, PBRM1, SETD2, BAP1, and KDM5C was evaluated in the nephrectomy tumor specimen, which served as a proxy for biopsy mutation status. The Raj et al. preoperative nomogram was used to predict the 12-year metastasis-free probability (MFP). The study outcome was MFP; the relationship between MFP and mutation status was evaluated with Cox regression models adjusting for the preoperative nomogram variables (age, gender, incidental presentation, lymphadenopathy, necrosis, and size).

Results: The study cohort included 188 males (74%) and 66 females (26%) with a median age of 58 years. VHL mutations were present in 152/254 patients (60%), PBRM1 in 91/254 (36%), SETD2 in 32/254 (13%), BAP1 in 19/254 (8%), and KDM5C in 19/254 (8%). Median follow-up for survivors was 8.1 years. Estimated 12-year MFP was 70% (95% CI: 63%-75%). On univariable analysis SETD2 (HR: 3.30), BAP1 (HR: 2.44) and PBRM1 (HR: 1.78) were significantly associated with a higher risk of metastases. After adjusting for known preoperative predictors in the existing nomogram, SETD2 mutations remained associated with a higher rate of metastases after nephrectomy (HR: 2.09, 95% CI: 1.19-3.67, P = 0.011).

Conclusion: In the current exploratory analysis, SETD2 mutations were significant predictors of MFP among patients treated for localized ccRCC. Our findings support future studies evaluating genetic alterations in preoperative renal biopsy samples as potential predictors of treatment outcome.
Source: http://dx.doi.org/10.1016/j.urolonc.2021.08.018
November 2021

Cardiovascular Outcomes in Patients With Type 2 Diabetes and Obesity: Comparison of Gastric Bypass, Sleeve Gastrectomy, and Usual Care.

Diabetes Care 2021 Nov 9;44(11):2552-2563. Epub 2021 Sep 9.

Department of Surgery, The Ohio State University Wexner Medical Center, Columbus, OH.

Objective: To determine which one of the two most common metabolic surgical procedures is associated with greater reduction in risk of major adverse cardiovascular events (MACE) in patients with type 2 diabetes mellitus (T2DM) and obesity.

Research Design And Methods: A total of 13,490 patients including 1,362 Roux-en-Y gastric bypass (RYGB), 693 sleeve gastrectomy (SG), and 11,435 matched nonsurgical patients with T2DM and obesity who received their care at the Cleveland Clinic (1998-2017) were analyzed, with follow-up through December 2018. With multivariable Cox regression analysis we estimated time to incident extended MACE, defined as first occurrence of coronary artery events, cerebrovascular events, heart failure, nephropathy, atrial fibrillation, and all-cause mortality.

Results: The cumulative incidence of the primary end point at 5 years was 13.7% (95% CI 11.4-15.9) in the RYGB group and 24.7% (95% CI 19.0-30.0) in the SG group, with an adjusted hazard ratio (HR) of 0.77 (95% CI 0.60-0.98, P = 0.04). Of the six individual end points, RYGB was associated with a significantly lower cumulative incidence of nephropathy at 5 years compared with SG (2.8% vs. 8.3%, respectively; HR 0.47 [95% CI 0.28-0.79], P = 0.005). Furthermore, RYGB was associated with a greater reduction in body weight, glycated hemoglobin, and use of medications to treat diabetes and cardiovascular diseases. Five years after RYGB, patients required more upper endoscopy (45.8% vs. 35.6%, P < 0.001) and abdominal surgical procedures (10.8% vs. 5.4%, P = 0.001) compared with SG.

Conclusions: In patients with obesity and T2DM, RYGB may be associated with greater weight loss, better diabetes control, and lower risk of MACE and nephropathy compared with SG.
Source: http://dx.doi.org/10.2337/dc20-3023
November 2021

Associations of weight loss with obesity-related comorbidities in a large integrated health system.

Diabetes Obes Metab 2021 Sep 2. Epub 2021 Sep 2.

Department of Quantitative Health Sciences, Lerner Research Institute, Cleveland Clinic, Cleveland, Ohio, USA.

Aims: To determine the health outcomes associated with weight loss in individuals with obesity, and to better understand the relationship between disease burden (ie, prior comorbidities and healthcare utilization) and weight loss in individuals with obesity, by analysing electronic health records (EHRs).

Materials And Methods: We conducted a case-control study using deidentified EHR-derived information from 204 921 patients seen at the Cleveland Clinic between 2000 and 2018. Patients were aged ≥20 years, had a body mass index ≥30 kg/m², and had ≥7 weight measurements over ≥3 years. Thirty outcomes were investigated, including chronic and acute diseases, as well as psychological and metabolic disorders. Weight change was investigated 3, 5 and 10 years prior to an event.

Results: Weight loss was associated with reduced incidence of many outcomes (eg, type 2 diabetes, nonalcoholic steatohepatitis/nonalcoholic fatty liver disease, obstructive sleep apnoea, hypertension; P < 0.05). Weight loss >10% was associated with increased incidence of certain outcomes including stroke and substance abuse. However, many outcomes that increased with weight loss were attenuated by disease burden adjustments.

Conclusions: This study provides the most comprehensive real-world evaluation of the health impacts of weight change to date. After comorbidity burden and healthcare utilization adjustments, weight loss was associated with an overall reduction in risk of many adverse outcomes.
Source: http://dx.doi.org/10.1111/dom.14538
September 2021

Protective heterologous T cell immunity in COVID-19 induced by the trivalent MMR and Tdap vaccine antigens.

Med (N Y) 2021 Sep 14;2(9):1050-1071.e7. Epub 2021 Aug 14.

Department of Pathology, Brigham and Women's Hospital & Harvard Medical School, Boston, MA 02115, USA.

Background: T cells control viral infection, promote vaccine durability, and in coronavirus disease 2019 (COVID-19) associate with mild disease. We investigated whether prior measles-mumps-rubella (MMR) or tetanus-diphtheria-pertussis (Tdap) vaccination elicits cross-reactive T cells that mitigate COVID-19.

Methods: Antigen-presenting cells (APC) loaded with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), MMR, or Tdap antigens and autologous T cells from COVID-19-convalescent participants, uninfected individuals, and COVID-19 mRNA-vaccinated donors were co-cultured. T cell activation and phenotype were detected by interferon-γ (IFN-γ) enzyme-linked immunospot (ELISpot) assays and flow cytometry. ELISAs (enzyme-linked immunosorbent assays) and validation studies identified the APC-derived cytokine(s) driving T cell activation. TCR clonotyping and single-cell RNA sequencing (scRNA-seq) identified cross-reactive T cells and their transcriptional profile. A propensity-weighted analysis of COVID-19 patients estimated the effects of MMR and Tdap vaccination on COVID-19 outcomes.
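
The propensity-weighted analysis mentioned above is typically implemented as inverse-probability-of-treatment weighting; the sketch below illustrates that general recipe on simulated data with made-up column names (`vaccinated`, `severe`, `age`, `male`), and is not the study's actual covariate set or model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# hypothetical data frame: one row per patient
df = pd.DataFrame({
    "age":        np.random.randint(20, 90, 500),
    "male":       np.random.randint(0, 2, 500),
    "vaccinated": np.random.randint(0, 2, 500),   # prior MMR/Tdap (exposure)
    "severe":     np.random.randint(0, 2, 500),   # severe COVID-19 (outcome)
})

# 1. propensity score: P(vaccinated | covariates)
ps_model = LogisticRegression().fit(df[["age", "male"]], df["vaccinated"])
ps = ps_model.predict_proba(df[["age", "male"]])[:, 1]

# 2. inverse-probability-of-treatment weights
w = np.where(df["vaccinated"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# 3. weighted comparison of outcome rates between exposure groups
rate = lambda g: np.average(df.loc[g, "severe"], weights=w[g.values])
treated, control = df["vaccinated"] == 1, df["vaccinated"] == 0
print("weighted risk difference:", rate(treated) - rate(control))
```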

Findings: High correlation was observed between T cell responses to SARS-CoV-2 (spike-S1 and nucleocapsid) and MMR and Tdap proteins in COVID-19-convalescent and -vaccinated individuals. The overlapping T cell population contained an effector memory T cell subset (effector memory T cells re-expressing CD45RA [TEMRA]) implicated in protective, anti-viral immunity, and their detection required APC-derived IL-15, known to sensitize T cells to activation. Cross-reactive TCR repertoires detected in antigen-experienced T cells recognizing SARS-CoV-2, MMR, and Tdap epitopes had TEMRA features. Indices of disease severity were reduced in MMR- or Tdap-vaccinated individuals by 32%-38% and 20%-23%, respectively, among COVID-19 patients.

Conclusions: Tdap and MMR memory T cells reactivated by SARS-CoV-2 may provide protection against severe COVID-19.

Funding: This study was supported by the National Institutes of Health (R01HL065095, R01AI152522, R01NS097719), a donation from Barbara and Amos Hostetter, and the Chleck Foundation.
Source: http://dx.doi.org/10.1016/j.medj.2021.08.004
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8363466
September 2021

Pan-cancer prediction of radiotherapy benefit using genomic-adjusted radiation dose (GARD): a cohort-based pooled analysis.

Lancet Oncol 2021 09 4;22(9):1221-1229. Epub 2021 Aug 4.

Department of Radiation Oncology, Moffitt Cancer Center, Tampa, FL, USA; Department of Oncologic Sciences, University of South Florida College of Medicine, Tampa, FL, USA.

Background: Despite advances in cancer genomics, radiotherapy is still prescribed on the basis of an empirical one-size-fits-all paradigm. Previously, we proposed a novel algorithm using the genomic-adjusted radiation dose (GARD) model to personalise prescription of radiation dose on the basis of the biological effect of a given physical dose of radiation, calculated using individual tumour genomics. We hypothesise that GARD will reveal interpatient heterogeneity associated with opportunities to improve outcomes compared with physical dose of radiotherapy alone. We aimed to test this hypothesis and investigate the GARD-based radiotherapy dosing paradigm.

Methods: We did a pooled, pan-cancer analysis of 11 previously published clinical cohorts of unique patients with seven different types of cancer, which are all available cohorts with the data required to calculate GARD, together with clinical outcome. The included cancers were breast cancer, head and neck cancer, non-small-cell lung cancer, pancreatic cancer, endometrial cancer, melanoma, and glioma. Our dataset comprised 1615 unique patients, of whom 1298 (982 with radiotherapy, 316 without radiotherapy) were assessed for time to first recurrence and 677 patients (424 with radiotherapy and 253 without radiotherapy) were assessed for overall survival. We analysed two clinical outcomes of interest: time to first recurrence and overall survival. We used Cox regression, stratified by cohort, to test the association between GARD and outcome with separate models using dose of radiation and sham-GARD (ie, patients treated without radiotherapy, but modelled as having a standard-of-care dose of radiotherapy) for comparison. We did interaction tests between GARD and treatment (with or without radiotherapy) using the Wald statistic.
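
A cohort-stratified Cox model with a GARD-by-radiotherapy interaction, as described above, could be set up along these lines with the lifelines package; the column names (`gard`, `rt`, `cohort`) and simulated data are placeholders rather than the pooled study dataset.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "time":   rng.exponential(5, n),      # time to first recurrence (years)
    "event":  rng.integers(0, 2, n),      # 1 = recurrence observed
    "gard":   rng.normal(30, 10, n),      # genomic-adjusted radiation dose
    "rt":     rng.integers(0, 2, n),      # 1 = received radiotherapy
    "cohort": rng.integers(0, 11, n),     # one of the 11 pooled cohorts (stratum)
})
df["gard_x_rt"] = df["gard"] * df["rt"]   # interaction term

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", strata=["cohort"])
cph.print_summary()   # the Wald z/p for gard_x_rt plays the role of the interaction test
```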

Findings: Pooled analysis of all available data showed that GARD as a continuous variable is associated with time to first recurrence (hazard ratio [HR] 0·98 [95% CI 0·97-0·99]; p=0·0017) and overall survival (0·97 [0·95-0·99]; p=0·0007). The interaction test showed the effect of GARD on overall survival depends on whether or not that patient received radiotherapy (Wald statistic p=0·011). The interaction test for GARD and radiotherapy was not significant for time to first recurrence (Wald statistic p=0·22). The HR for physical dose of radiation was 0·99 (95% CI 0·97-1·01; p=0·53) for time to first recurrence and 1·00 (0·96-1·04; p=0·95) for overall survival. The HR for sham-GARD was 1·00 (0·97-1·03; p=1·00) for time to first recurrence and 1·00 (0·98-1·02; p=0·87) for overall survival.

Interpretation: The biological effect of radiotherapy, as quantified by GARD, is significantly associated with time to first recurrence and overall survival for patients with cancer treated with radiation. It is predictive of radiotherapy benefit, and physical dose of radiation is not. We propose integration of genomics into radiation dosing decisions, using a GARD-based framework, as the new paradigm for personalising radiotherapy prescription dose.

Funding: None.
Source: http://dx.doi.org/10.1016/S1470-2045(21)00347-8
September 2021

Mechanisms of socioeconomic differences in COVID-19 screening and hospitalizations.

PLoS One 2021 5;16(8):e0255343. Epub 2021 Aug 5.

Cleveland Clinic Lerner College of Medicine at Case Western Reserve University, Cleveland, Ohio.

Background: Social and ecological differences in early SARS-CoV-2 pandemic screening and outcomes have been documented, but the means by which these differences have arisen are not well understood.

Objective: To characterize socioeconomic and chronic disease-related mechanisms underlying these differences.

Design: Observational cohort study.

Setting: Outpatient and emergency care.

Patients: 12900 Cleveland Clinic Health System patients referred for SARS-CoV-2 testing between March 17 and April 15, 2020.

Interventions: Nasopharyngeal PCR test for SARS-CoV-2 infection.

Measurements: Test location (emergency department, ED, vs. outpatient care), COVID-19 symptoms, test positivity and hospitalization among positive cases.

Results: We identified six classes of symptoms, ranging in test positivity from 3.4% to 23%. Non-Hispanic Black race/ethnicity was disproportionately represented in the group with the highest positivity rate. Non-Hispanic Black patients were 1.81 [95% confidence interval: 0.91-3.59] times (at age 20) to 2.37 [1.54-3.65] times (at age 80) more likely to test positive for SARS-CoV-2 than non-Hispanic White patients, while test positivity did not differ significantly across the neighborhood income spectrum. Testing in the emergency department (OR: 5.4 [3.9, 7.5]) and cardiovascular disease (OR: 2.5 [1.7, 3.8]) were related to increased risk of hospitalization among the 1247 patients who tested positive.

Limitations: Constraints on the availability of test kits forced providers to test selectively for SARS-CoV-2.

Conclusion: Non-Hispanic Black patients and patients from low-income neighborhoods tended toward more severe and prolonged symptom profiles and increased comorbidity burden. These factors were associated with higher rates of testing in the ED. Non-Hispanic Black patients also had higher test positivity rates.
Source: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0255343
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8341486
August 2021

Improving the prediction of epilepsy surgery outcomes using basic scalp EEG findings.

Epilepsia 2021 Oct 2;62(10):2439-2450. Epub 2021 Aug 2.

Epilepsy Center, Cleveland Clinic Foundation, Cleveland, Ohio, USA.

Objective: This study aims to evaluate the role of scalp electroencephalography (EEG; ictal and interictal patterns) in predicting resective epilepsy surgery outcomes. We use the data to further develop a nomogram to predict seizure freedom.

Methods: We retrospectively reviewed the scalp EEG findings and clinical data of patients who underwent surgical resection at three epilepsy centers. Using both EEG and clinical variables categorized into 13 isolated candidate predictors and 6 interaction terms, we built a multivariable Cox proportional hazards model to predict seizure freedom 2 years after surgery. Harrell's step-down procedure was used to sequentially eliminate the least-informative variables from the model until the change in the concordance index (c-index) with variable removal was less than 0.01. We created a separate model using only clinical variables. Discrimination of the two models was compared to evaluate the role of scalp EEG in seizure-freedom prediction.
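
The Harrell-style step-down described above (drop the least informative covariate while the concordance index changes by less than 0.01) might look roughly like the lifelines-based sketch below; it is an illustrative reconstruction with hypothetical column names, not the authors' code.

```python
import numpy as np
from lifelines import CoxPHFitter

def step_down(df, duration_col, event_col, min_c_change=0.01):
    """Backward elimination: keep dropping the covariate whose removal hurts the
    concordance index least, until removal would cost more than `min_c_change`."""
    covariates = [c for c in df.columns if c not in (duration_col, event_col)]
    fit = lambda cols: CoxPHFitter().fit(df[[duration_col, event_col] + cols],
                                         duration_col, event_col)
    current = fit(covariates)
    while len(covariates) > 1:
        best_c, best_drop = -np.inf, None
        for c in covariates:                          # try removing each covariate in turn
            candidate = fit([x for x in covariates if x != c])
            if candidate.concordance_index_ > best_c:
                best_c, best_drop = candidate.concordance_index_, c
        if current.concordance_index_ - best_c < min_c_change:
            covariates.remove(best_drop)              # removal costs little: drop it
            current = fit(covariates)
        else:
            break
    return covariates, current

# usage (hypothetical columns): selected, model = step_down(df, "time_to_recurrence", "seizure_recurrence")
```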

Results: Four hundred seventy patient records were analyzed. Following internal validation, the full Clinical + EEG model achieved an optimism-corrected c-index of 0.65, whereas the c-index of the model without EEG data was 0.59. The presence of focal to bilateral tonic-clonic seizures (FBTCS), high preoperative seizure frequency, absence of hippocampal sclerosis, and presence of nonlocalizable seizures predicted worse outcome. The presence of FBTCS had the largest impact for predicting outcome. The analysis of the models' interactions showed that in patients with unilateral interictal epileptiform discharges (IEDs), temporal lobe surgery cases had a better outcome. In cases with bilateral IEDs, abnormal magnetic resonance imaging (MRI) predicted worse outcomes, and in cases without IEDs, patients with extratemporal epilepsy and abnormal MRI had better outcomes.

Significance: This study highlights the value of scalp EEG, particularly the significance of IEDs, in predicting surgical outcome. The nomogram delivers an individualized prediction of postoperative outcome, and provides a unique assessment of the relationship between the outcome and preoperative findings.
Source: http://dx.doi.org/10.1111/epi.17024
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8488002
October 2021

Derivation and Validation of the Critical Bronchiolitis Score for the PICU.

Pediatr Crit Care Med 2021 Jul 14. Epub 2021 Jul 14.

Division of Pediatric Critical Care Medicine, UH Rainbow Babies and Children's Hospital, Cleveland, OH. Department of Quantitative Health Sciences, Lerner Research Institute, Cleveland Clinic, Cleveland, OH. Department of Pediatrics, Case Western Reserve University School of Medicine, Cleveland, OH.

Objectives: To derive and internally validate a bronchiolitis-specific illness severity score (the Critical Bronchiolitis Score) that outperforms mortality-based illness severity scores (e.g., Pediatric Risk of Mortality) in measuring expected duration of respiratory support and PICU length of stay for critically ill children with bronchiolitis.

Design: Retrospective database study using the Virtual Pediatric Systems (VPS, LLC; Los Angeles, CA) database.

Setting: One-hundred twenty-eight North-American PICUs.

Patients: Fourteen thousand four hundred seven children less than 2 years old admitted to a contributing PICU with a primary diagnosis of bronchiolitis and use of ICU-level respiratory support (defined as high-flow nasal cannula, noninvasive ventilation, invasive mechanical ventilation, or negative pressure ventilation) at 12 hours after PICU admission.

Interventions: Patient-level variables available at 12 hours from PICU admission, duration of ICU-level respiratory support, and PICU length of stay data were extracted for analysis. After randomly dividing the cohort into derivation and validation groups, patient-level variables that were significantly associated with the study outcomes were selected in a stepwise backward fashion for inclusion in the final score. Score performance in the validation cohort was assessed using root mean squared error and mean absolute error, and performance was compared with that of existing PICU illness severity scores.
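
Comparing predicted against observed support durations by root mean squared error and mean absolute error, as described above, reduces to a few lines; the numbers below are invented purely for illustration.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

# hypothetical predicted vs. observed days of ICU-level respiratory support
observed  = np.array([2, 5, 3, 7, 1, 4])
new_score = np.array([2.5, 4.0, 3.5, 6.0, 1.5, 4.5])  # bronchiolitis-specific prediction
old_score = np.array([4.0, 4.0, 4.0, 4.0, 4.0, 4.0])  # mortality-based score, weakly informative

for name, pred in [("bronchiolitis score", new_score), ("mortality-based score", old_score)]:
    rmse = np.sqrt(mean_squared_error(observed, pred))
    mae = mean_absolute_error(observed, pred)
    print(f"{name}: RMSE={rmse:.2f}, MAE={mae:.2f}")
```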

Measurements And Main Results: Twelve commonly available patient-level variables were included in the Critical Bronchiolitis Score. Outcomes calculated with the score were similar to actual outcomes in the validation cohort. The Critical Bronchiolitis Score demonstrated a statistically significantly stronger association with duration of ICU-level respiratory support and PICU length of stay than mortality-based scores as measured by root mean squared error and mean absolute error.

Conclusions: The Critical Bronchiolitis Score performed better than PICU mortality-based scores in measuring expected duration of ICU-level respiratory support and ICU length of stay. This score may have utility to enrich interventional trials and adjust for illness severity in observational studies in this very common PICU condition.
Source: http://dx.doi.org/10.1097/PCC.0000000000002808
July 2021

International Multi-Site Initiative to Develop an MRI-Inclusive Nomogram for Side-Specific Prediction of Extraprostatic Extension of Prostate Cancer.

Cancers (Basel) 2021 May 27;13(11). Epub 2021 May 27.

Clínica Girona, Institute Catalan of Health-IDI, University of Girona, 17004 Girona, Spain.

Background: To develop an international, multi-site nomogram for side-specific prediction of extraprostatic extension (EPE) of prostate cancer based on clinical, biopsy, and magnetic resonance imaging (MRI)-derived data.

Methods: Ten institutions from the USA and Europe contributed clinical and side-specific biopsy and MRI variables of consecutive patients who underwent prostatectomy. A logistic regression model was used to develop a nomogram for predicting side-specific EPE on prostatectomy specimens. The performance of the statistical model was evaluated by bootstrap resampling and cross validation and compared with the performance of benchmark models that do not incorporate MRI findings.
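
Bootstrap resampling for internal validation, as used above, is commonly implemented as Harrell's optimism correction; the sketch below shows that recipe for a logistic model's AUC and is a generic illustration, not the study's validation code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Harrell-style bootstrap optimism correction for a logistic model's AUC."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))              # bootstrap resample with replacement
        Xb, yb = X[idx], y[idx]
        if len(np.unique(yb)) < 2:
            continue                                       # skip degenerate resamples
        m = LogisticRegression(max_iter=1000).fit(Xb, yb)
        auc_boot = roc_auc_score(yb, m.predict_proba(Xb)[:, 1])   # on the bootstrap sample
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])     # on the original sample
        optimism.append(auc_boot - auc_orig)
    return apparent - np.mean(optimism)
```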

Results: Data from 840 patients were analyzed; pathologic EPE was found in 320/840 (38.1%). The nomogram model included patient age, prostate-specific antigen density, side-specific biopsy data (i.e., Gleason grade group, percent positive cores, tumor extent), and side-specific MRI features (i.e., presence of a PI-RADSv2 4 or 5 lesion, level of suspicion for EPE, length of capsular contact). The area under the receiver operating characteristic curve of the new, MRI-inclusive model (0.828, 95% confidence limits: 0.805, 0.852) was significantly higher than that of any of the benchmark models (P < 0.001 for all).

Conclusions: In an international, multi-site study, we developed an MRI-inclusive nomogram for the side-specific prediction of EPE of prostate cancer that demonstrated significantly greater accuracy than clinical benchmark models.
Source: http://dx.doi.org/10.3390/cancers13112627
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8198352
May 2021

Response to: Noncancer Cells in Tumor Samples May Bias the Predictive Genomically Adjusted Radiation Dose.

J Thorac Oncol 2021 06;16(6):e48-e49

Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida.

Source: http://dx.doi.org/10.1016/j.jtho.2021.03.020
June 2021

Nomograms to Predict Verbal Memory Decline After Temporal Lobe Resection in Adults With Epilepsy.

Neurology 2021 May 19. Epub 2021 May 19.

Epilepsy Center.

Objective: To develop and externally validate models to predict the probability of postoperative verbal memory decline in adults following temporal lobe resection (TLR) for epilepsy using easily-accessible preoperative clinical predictors.

Methods: Multivariable models were developed to predict delayed verbal memory outcome on three commonly used measures: Rey Auditory Verbal Learning Test (RAVLT) and Logical Memory (LM) and Verbal Paired Associates (VPA) subtests from Wechsler Memory Scale-Third Edition. Using Harrell's step-down procedure for variable selection, models were developed in 359 adults who underwent TLR at Cleveland Clinic and validated in 290 adults at one of five epilepsy surgery centers in the United States or Canada.

Results: Twenty-nine percent of the development cohort and 26% of the validation cohort demonstrated significant decline on at least one verbal memory measure. Initial models had good to excellent predictive accuracy (concordance (c) statistic range = 0.77-0.80) in identifying patients with memory decline; however, models slightly underestimated decline in the validation cohort. Model coefficients were updated using data from both cohorts to improve stability. The model for RAVLT included surgery side, baseline memory score, and hippocampal resection. The models for LM and VPA included surgery side, baseline score, and education. Updated model performance was good to excellent (RAVLT c=0.81, LM c=0.76, VPA c=0.78). Model calibration was very good, indicating no systematic over- or under-estimation of risk.

Conclusions: Nomograms are provided in two easy-to-use formats to assist clinicians in estimating the probability of verbal memory decline in adults considering TLR for treatment of epilepsy.

Classification Of Evidence: This study provides Class II evidence that multivariable prediction models accurately predict verbal memory decline after temporal lobe resection for epilepsy in adults.
Source: http://dx.doi.org/10.1212/WNL.0000000000012221
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8302146
May 2021

Protective heterologous T cell immunity in COVID-19 induced by MMR and Tdap vaccine antigens.

bioRxiv 2021 May 4. Epub 2021 May 4.

T cells are critical for control of viral infection and effective vaccination. We investigated whether prior Measles-Mumps-Rubella (MMR) or Tetanus-Diphtheria-pertussis (Tdap) vaccination elicits cross-reactive T cells that mitigate COVID-19. Using co-cultures of antigen presenting cells (APC) loaded with antigens and autologous T cells, we found a high correlation between responses to SARS-CoV-2 (Spike-S1 and Nucleocapsid) and MMR and Tdap vaccine proteins in both SARS-CoV-2 infected individuals and individuals immunized with mRNA-based SARS-CoV-2 vaccines. The overlapping T cell population contained effector memory T cells (TEMRA) previously implicated in anti-viral immunity, and their activation required APC-derived IL-15. TCR- and scRNA-sequencing detected cross-reactive clones with TEMRA features among the cells recognizing SARS-CoV-2, MMR and Tdap epitopes. A propensity-weighted analysis of 73,582 COVID-19 patients revealed that severe disease outcomes (hospitalization and transfer to intensive care unit or death) were reduced in MMR- or Tdap-vaccinated individuals by 32-38% and 20-23%, respectively. In summary, SARS-CoV-2 re-activates memory T cells generated by Tdap and MMR vaccines, which may reduce disease severity.
Source: http://dx.doi.org/10.1101/2021.05.03.441323
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8109200
May 2021

Validation of a Risk Calculator to Personalize Graft Choice and Reduce Rupture Rates for Anterior Cruciate Ligament Reconstruction.

Am J Sports Med 2021 06 4;49(7):1777-1785. Epub 2021 May 4.

Faculty of Health Sciences, Western University, London, Ontario, Canada.

Background: Anterior cruciate ligament reconstructions (ACLRs) fail at an alarmingly high rate in young active individuals. The Multicenter Orthopaedic Outcomes Network (MOON) knee group has developed an autograft risk calculator that uses patient characteristics and lifestyle to predict the probability of graft rupture if the surgeon uses a hamstring tendon (HT) or a bone-patellar tendon-bone (BPTB) graft to reconstruct the ligament. If validated, this risk calculator can be used during the shared decision-making process to make optimal ACLR autograft choices and reduce rupture rates. The STABILITY 1 randomized clinical trial offers a large, rigorously collected data set of similar young active patients who received HT autograft with or without lateral extra-articular tenodesis (LET) for ACLR.

Purpose/hypothesis: The purpose was to validate the ACLR graft rupture risk calculator in a large external data set and to investigate the utility of BPTB and LET for ACLR. We hypothesized that the risk calculator would maintain adequate discriminative ability and calibration in the external STABILITY 1 data set when compared with the initial MOON development data set.

Study Design: Cohort study (diagnosis); Level of evidence, 1.

Methods: The model predictors for the risk calculator include age, sex, body mass index, sport played at the time of injury, Marx Activity Score, preoperative knee laxity, and graft type. The STABILITY 1 trial data set was used for external validation. Discriminative ability, calibration, and diagnostic test validity of the model were assessed. Finally, predictor strength in the initial and validation samples was compared.

Results: The model showed acceptable discriminative ability (area under the curve = 0.73), calibration (Brier score = 0.07), and specificity (85.3%) to detect patients who will experience a graft rupture. Age, high-grade preoperative knee laxity, and graft type were significant predictors of graft rupture in young active patients. BPTB and the addition of LET to HT were protective against graft rupture versus HT autograft alone.

Conclusion: The MOON risk calculator is a valid predictor of ACLR graft rupture and is appropriate for clinical practice. This study provides evidence supporting the idea that isolated HT autografts should be avoided for young active patients undergoing ACLR.

Registration: NCT00463099 (MOON); NCT02018354 (STABILITY 1) (ClinicalTrials.gov identifiers).
Source: http://dx.doi.org/10.1177/03635465211010798
June 2021

Letter Response.

J Thorac Oncol 2021 05;16(5):e28-e29

Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida.

Source: http://dx.doi.org/10.1016/j.jtho.2021.02.027
May 2021

Type 2 Diabetes Subtype Responsive to ACCORD Intensive Glycemia Treatment.

Diabetes Care 2021 Apr 16. Epub 2021 Apr 16.

Department of Quantitative Health Sciences, Lerner Research Institute, Cleveland Clinic, Cleveland, OH

Objective: Current type 2 diabetes (T2D) management contraindicates intensive glycemia treatment in patients with high cardiovascular disease (CVD) risk and is partially motivated by evidence of harms in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial. Heterogeneity in response to intensive glycemia treatment has been observed, suggesting potential benefit for some individuals.

Research Design And Methods: ACCORD was a randomized controlled trial that investigated whether intensively treating glycemia in individuals with T2D would reduce CVD outcomes. Using a novel approach to cluster HbA1c trajectories, we identified groups in the intensive glycemia arm with modified CVD risk. A genome-wide analysis and a polygenic score (PS) were developed to predict group membership. Mendelian randomization was performed to infer causality.
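
A polygenic score of the kind referenced above is usually a weighted sum of effect-allele dosages; the toy sketch below only illustrates that arithmetic. The weights are invented (only the variant ID rs220721 is taken from the abstract; its weight here is made up), so this is not the ACCORD-derived score.

```python
import numpy as np

# hypothetical per-variant weights from a genome-wide analysis (e.g., log odds ratios)
weights = {"rs220721": 0.42, "rsA": -0.15, "rsB": 0.08}   # rsA/rsB are made-up variant IDs

# allele dosages (0, 1, or 2 copies of the effect allele) for one individual
dosages = {"rs220721": 1, "rsA": 2, "rsB": 0}

# polygenic score = sum over variants of weight * dosage
ps = sum(weights[snp] * dosages.get(snp, 0) for snp in weights)
print(f"polygenic score: {ps:.2f}")
```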

Results: We identified four clinical groupings in the intensive glycemia arm, and clinical group 4 (C4) displayed fewer CVD (hazard ratio [HR] 0.34; 2.01 × 10) and microvascular outcomes (HR 0.86; P = 0.015) than those receiving standard treatment. A single-nucleotide polymorphism, rs220721, reached suggestive significance in C4 (4.34 × 10). The PS predicted C4 with high accuracy (area under the receiver operating characteristic curve 0.98), and this predicted C4 displayed reduced CVD risk with intensive versus standard glycemia treatment (HR 0.53; 4.02 × 10), but not reduced risk of microvascular outcomes ( 0.05). Mendelian randomization indicated causality between PS, on-trial HbA1c, and reduction in CVD outcomes ( 0.05).

Conclusions: We found evidence of a T2D clinical group in ACCORD that benefited from intensive glycemia treatment, and membership in this group could be predicted using genetic variants. This study generates new hypotheses with implications for precision medicine in T2D and represents an important development in this landmark clinical trial warranting further investigation.
Source: http://dx.doi.org/10.2337/dc20-2700
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8247498
April 2021

A first step towards a global nomogram to predict disease progression for men on active surveillance.

Transl Androl Urol 2021 Mar;10(3):1102-1109

Department of Quantitative Health Sciences, Cleveland Clinic, Cleveland, OH, USA.

Background: Signs of disease progression (28%) and conversion to active treatment without evidence of disease progression (13%) are the main reasons for discontinuation of active surveillance (AS) in men with localised prostate cancer (PCa). We aimed to develop a nomogram to predict disease progression in these patients.

Methods: As a first step in the development of a nomogram, using data from Movember's GAP3 Consortium (n=14,380), we assessed heterogeneity between centres in terms of risk of disease progression. We started with assessment of baseline hazards for disease progression based on grouping of centres according to follow-up protocols [high: yearly; intermediate: ~2 yearly; and low: at years 1, 4 and 7 (i.e., PRIAS)]. We conducted cause-specific random effect Cox proportional hazards regression to estimate the risk of disease progression by centre in each group.

Results: Disease progression rates varied substantially between centres [median hazard ratio (MHR): 2.5]. After adjustment for various clinical factors (age, year of diagnosis, Gleason grade group, number of positive cores and PSA), substantial heterogeneity in disease progression remained between centres.

Conclusions: When combining worldwide data on AS, we noted unexplained differences of disease progression rate even after adjustment for various clinical factors. This suggests that when developing a global nomogram, local adjustments for differences in risk of disease progression and competing outcomes such as conversion to active treatment need to be considered.
Source: http://dx.doi.org/10.21037/tau-20-1082
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8039580
March 2021

Derivation and validation of a machine learning risk score using biomarker and electronic patient data to predict progression of diabetic kidney disease.

Diabetologia 2021 Jul 2;64(7):1504-1515. Epub 2021 Apr 2.

Department of Surgery, Perelman School of Medicine at University of Pennsylvania, Philadelphia, PA, USA.

Aim: Predicting progression in diabetic kidney disease (DKD) is critical to improving outcomes. We sought to develop/validate a machine-learned, prognostic risk score (KidneyIntelX™) combining electronic health records (EHR) and biomarkers.

Methods: This is an observational cohort study of patients with prevalent DKD/banked plasma from two EHR-linked biobanks. A random forest model was trained, and performance (AUC, positive and negative predictive values [PPV/NPV], and net reclassification index [NRI]) was compared with that of a clinical model and Kidney Disease: Improving Global Outcomes (KDIGO) categories for predicting a composite outcome of eGFR decline of ≥5 ml/min per year, ≥40% sustained decline, or kidney failure within 5 years.
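
A random forest risk score with derivation-based cut-offs and PPV/NPV read-offs, as described above, can be sketched as follows. The features, outcome, and quantile cut-offs are simulated placeholders (loosely mirroring the 46%/37%/17% strata), not KidneyIntelX itself, and a real evaluation would use a held-out validation set rather than the training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))     # EHR features + biomarkers (placeholder values)
y = rng.integers(0, 2, 1000)        # 1 = progressive decline in kidney function

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
risk = rf.predict_proba(X)[:, 1]    # in-sample for brevity; use a validation set in practice

low_cut, high_cut = np.quantile(risk, [0.46, 0.83])  # illustrative derivation cut-offs
high, low = risk >= high_cut, risk < low_cut
print("PPV (high-risk group):", y[high].mean())
print("NPV (low-risk group):", 1 - y[low].mean())
```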

Results: In 1146 patients, the median age was 63 years, 51% were female, the baseline eGFR was 54 ml/min per 1.73 m², the urine albumin to creatinine ratio (uACR) was 6.9 mg/mmol, follow-up was 4.3 years and 21% had the composite endpoint. On cross-validation in derivation (n = 686), KidneyIntelX had an AUC of 0.77 (95% CI 0.74, 0.79). In validation (n = 460), the AUC was 0.77 (95% CI 0.76, 0.79). By comparison, the AUC for the clinical model was 0.62 (95% CI 0.61, 0.63) in derivation and 0.61 (95% CI 0.60, 0.63) in validation. Using derivation cut-offs, KidneyIntelX stratified 46%, 37% and 17% of the validation cohort into low-, intermediate- and high-risk groups for the composite kidney endpoint, respectively. The PPV for progressive decline in kidney function in the high-risk group was 61% for KidneyIntelX vs 40% for the highest risk strata by KDIGO categorisation (p < 0.001). Only 10% of those scored as low risk by KidneyIntelX experienced progression (i.e., NPV of 90%). The NRI for the high-risk group was 41% (p < 0.05).

Conclusions: KidneyIntelX improved prediction of kidney outcomes over KDIGO and clinical models in individuals with early stages of DKD.
Source: http://dx.doi.org/10.1007/s00125-021-05444-0
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8187208
July 2021

Development and Internal Validation of A Prediction Tool To Assist Clinicians Selecting Second-Line Therapy Following Metformin Monotherapy For Type 2 Diabetes.

Endocr Pract 2021 Apr 15;27(4):334-341. Epub 2020 Dec 15.

Quantitative Health Sciences, Cleveland Clinic, Cleveland, Ohio.

Objective: Adults with type 2 diabetes (T2D) face increased risk of many long-term adverse outcomes. While managing patients with T2D, clinicians are challenged to stay informed regarding all new therapies and must consider the potential risks and benefits resulting from their use. Metformin (MET) is typically prescribed as first-line therapy, but a second line is often needed, given that MET can be insufficient for maintaining long-term glycemic control. Our objective was to develop a predictive decision-making tool to help clinicians use an outcome-based approach to select second-line therapies for patients when MET monotherapy is insufficient for glycemic control.

Methods: Electronic health records of 19 277 adults with T2D on MET monotherapy and ≥3 months of either GLP-1RA, DPP-4i, Insulin, SGLT-2i, SFU, or TZD therapy were reviewed at Cleveland Clinic from patient visits occurring between 2005 and 2019. Separate models were developed to predict likelihood of each main outcome measure (stroke, myocardial infarction, worsening hypertension, renal failure, and death). Discrimination and calibration were assessed with bootstrapping.

Results: The median follow-up time for those without an event was 3.6 years (interquartile range 1.9, 6.3). Model discrimination was evaluated by concordance indices (values range between 0 and 1: 1 indicates perfect discrimination; 0.5 reflects the same discrimination as chance). The models demonstrated strong discrimination ability, with concordance index values for the outcomes as follows: myocardial infarction (0.786), stroke (0.805), worsening hypertension (0.855), renal failure (0.808), and death (0.827).
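
For reference, the concordance index described above can be computed directly from its pairwise definition; the sketch below is a didactic implementation (ignoring some tie conventions) rather than the software used in the study.

```python
from itertools import combinations

def concordance_index(event_times, risk_scores, events):
    """Fraction of usable pairs in which the higher-risk patient has the earlier event.
    1.0 = perfect discrimination, 0.5 = no better than chance."""
    concordant, usable = 0.0, 0
    for i, j in combinations(range(len(event_times)), 2):
        if event_times[i] == event_times[j]:
            continue                              # tied times: skipped in this simple version
        first, second = (i, j) if event_times[i] < event_times[j] else (j, i)
        if not events[first]:
            continue                              # earlier time is censored: pair not usable
        usable += 1
        if risk_scores[first] > risk_scores[second]:
            concordant += 1
        elif risk_scores[first] == risk_scores[second]:
            concordant += 0.5                     # ties in predicted risk count as half
    return concordant / usable

times = [5, 3, 8, 2]; scores = [0.2, 0.6, 0.1, 0.9]; events = [1, 1, 0, 1]
print(concordance_index(times, scores, events))   # 1.0 for this toy example
```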

Conclusion: A decision-making tool has been developed that may afford clinicians a more objective and individualized approach to choosing a second-line therapy to control glycemia for persons with T2D.
Source: http://dx.doi.org/10.1016/j.eprac.2020.10.015
April 2021

Random forest swarm optimization-based for heart diseases diagnosis.

J Biomed Inform 2021 03 1;115:103690. Epub 2021 Feb 1.

Department of Quantitative Health Sciences, Cleveland Cancer Foundation, Cleveland, OH, United States.

Heart disease has been one of the leading causes of death worldwide in recent years. Among diagnostic methods for heart disease, angiography is one of the most common, but it is costly and has side effects. Given the difficulty of heart disease prediction, data mining can play an important role in predicting heart disease accurately. In this paper, a new approach to predicting heart disease is proposed by combining multi-objective particle swarm optimization (MOPSO) and random forest. The main goal is to produce diverse and accurate decision trees and, simultaneously, to determine the (near) optimal number of them. In this method, an evolutionary multi-objective approach is used in place of the random forest's usual mechanisms, i.e., bootstrap sampling, random feature selection, and an arbitrarily chosen number of training sets. In this way, different training sets, with different samples and features, are generated for training each tree, and the solutions obtained on the Pareto-optimal fronts determine the number of training sets required to build the random forest. As a result, the random forest's performance, and consequently its prediction accuracy, can be improved. The proposed method's effectiveness is investigated by comparing its performance with that of individual and ensemble classifiers over six heart disease datasets. The results suggest that the proposed method, with a (near) optimal number of classifiers, outperforms the random forest algorithm with different classifiers.
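
The core idea of training each tree on a different subset of samples and features can be sketched as below; here the subsets are drawn at random, whereas the paper selects them with multi-objective particle swarm optimization, so treat this only as a simplified illustration rather than the published algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def diverse_forest(X, y, n_trees=25, feature_frac=0.6, sample_frac=0.8, seed=0):
    """Train trees on different random subsets of rows and features.
    (The paper selects these subsets with MOPSO; here they are purely random.)"""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    trees = []
    for _ in range(n_trees):
        rows = rng.choice(n, int(sample_frac * n), replace=False)
        cols = rng.choice(p, max(1, int(feature_frac * p)), replace=False)
        tree = DecisionTreeClassifier(random_state=0).fit(X[np.ix_(rows, cols)], y[rows])
        trees.append((tree, cols))
    return trees

def predict(trees, X):
    votes = np.mean([t.predict(X[:, cols]) for t, cols in trees], axis=0)
    return (votes >= 0.5).astype(int)     # majority vote over the ensemble
```
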
Source: http://dx.doi.org/10.1016/j.jbi.2021.103690
March 2021

Revisiting a Null Hypothesis: Exploring the Parameters of Oligometastasis Treatment.

Int J Radiat Oncol Biol Phys 2021 06 21;110(2):371-381. Epub 2021 Jan 21.

Translational Hematology and Oncology Research, Cleveland Clinic, Cleveland, Ohio; Systems Biology and Bioinformatics Program, Department of Nutrition, Case Western Reserve School of Medicine, Cleveland, Ohio; Department of Radiation Oncology, Cleveland Clinic, Cleveland, Ohio. Electronic address:

Purpose: In the treatment of patients with metastatic cancer, the current paradigm states that metastasis-directed therapy does not prolong life. This paradigm forms the basis of clinical trial null hypotheses, where trials are built to test the null hypothesis that patients garner no overall survival benefit from targeting metastatic lesions. However, with advancing imaging technology and increasingly precise techniques for targeting lesions, a much larger proportion of metastatic disease can be treated. As a result, the life-extending benefit of targeting metastatic disease is becoming increasingly clear.

Methods And Materials: In this work, we suggest shifting this qualitative null hypothesis and describe a mathematical model that can be used to frame a new, quantitative null. We begin with a very simple formulation of tumor growth, an exponential function, and illustrate how the same intervention (removing a given number of cells from the tumor) at different times affects survival. Additionally, we postulate where recent clinical trials fit into this parameter space and discuss the implications of clinical trial design in changing these quantitative parameters.

Results: Our model shows that although any amount of cell kill will extend survival, in many cases the extent is so small as to be unnoticeable in a clinical context or is outweighed by factors related to toxicity and treatment time.

Conclusions: Recasting the null in these quantitative terms will allow trialists to design trials specifically to increase understanding of the circumstances (patient selection, disease burden, tumor growth kinetics) that can lead to improved overall survival when targeting metastatic lesions, rather than whether targeting metastases extends survival for patients with (oligo-) metastatic disease.
Source: http://dx.doi.org/10.1016/j.ijrobp.2020.12.044
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8122026
June 2021

Developing a Clinical Prediction Score: Comparing Prediction Accuracy of Integer Scores to Statistical Regression Models.

Anesth Analg 2021 06;132(6):1603-1613

Department of Quantitative Health Sciences, Lerner Research Institute, Cleveland Clinic, Cleveland, Ohio.

Researchers often convert prediction tools built on statistical regression models into integer scores and risk classification systems in the name of simplicity. However, this workflow discards useful information and reduces prediction accuracy. We, therefore, investigated the impact on prediction accuracy when researchers simplify a regression model into an integer score using a simulation study and an example clinical data set. Simulated independent training and test sets (n = 1000) were randomly generated such that a logistic regression model would perform at a specified target area under the receiver operating characteristic curve (AUC) of 0.7, 0.8, or 0.9. After fitting a logistic regression with continuous covariates to each data set, continuous variables were dichotomized using data-dependent cut points. A logistic regression was refit, and the coefficients were scaled and rounded to create an integer score. A risk classification system was built by stratifying integer scores into low-, intermediate-, and high-risk tertiles. Discrimination and calibration were assessed by calculating the AUC and index of prediction accuracy (IPA) for each model. The optimism in performance between the training set and test set was calculated for both AUC and IPA. The logistic regression model using the continuous form of covariates outperformed all other models. In the simulation study, converting the logistic regression model to an integer score and subsequent risk classification system incurred an average decrease of 0.057-0.094 in AUC, and an absolute 6.2%-17.5% in IPA. The largest decrease in both AUC and IPA occurred in the dichotomization step. The dichotomization and risk stratification steps also increased the optimism of the resulting models, such that they appeared to be able to predict better than they actually would on new data. In the clinical data set, converting the logistic regression with continuous covariates to an integer score incurred a decrease in externally validated AUC of 0.06 and a decrease in externally validated IPA of 13%. Converting a regression model to an integer score decreases model performance considerably. Therefore, we recommend developing a regression model that incorporates all available information to make the most accurate predictions possible, and using the unaltered regression model when making predictions for individual patients. In all cases, researchers should be mindful that they correctly validate the specific model that is intended for clinical use.
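
The degradation described above can be reproduced on simulated data: fit a logistic regression on continuous covariates, then dichotomize at the median, refit, and round the coefficients into an integer score. The sketch below is a minimal illustration of that workflow, not the paper's simulation code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, true_beta = 1000, np.array([1.0, 0.7, 0.4])
X = rng.normal(size=(n, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta))))
X_test = rng.normal(size=(n, 3))
y_test = rng.binomial(1, 1 / (1 + np.exp(-(X_test @ true_beta))))

# model 1: logistic regression on the continuous covariates
full = LogisticRegression().fit(X, y)
auc_full = roc_auc_score(y_test, full.predict_proba(X_test)[:, 1])

# model 2: dichotomize at the median, refit, then round coefficients to integers
cuts = np.median(X, axis=0)
Xd, Xd_test = (X > cuts).astype(int), (X_test > cuts).astype(int)
simple = LogisticRegression().fit(Xd, y)
int_weights = np.round(simple.coef_[0] / np.abs(simple.coef_[0]).min())
auc_int = roc_auc_score(y_test, Xd_test @ int_weights)   # the "integer score"

print(f"continuous-covariate AUC: {auc_full:.3f}  integer-score AUC: {auc_int:.3f}")
```
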
Source: http://dx.doi.org/10.1213/ANE.0000000000005362
June 2021

An Algorithm for Classifying Patients Most Likely to Develop Severe Coronavirus Disease 2019 Illness.

Crit Care Explor 2020 Dec 16;2(12):e0300. Epub 2020 Dec 16.

Neurological Institute, Chief Research Information Officer, Cleveland Clinic, Cleveland, OH.

Objectives: To develop an algorithm that predicts an individualized risk of severe coronavirus disease 2019 illness (i.e., ICU admission or death) upon testing positive for coronavirus disease 2019.

Design: A retrospective cohort study.

Setting: Cleveland Clinic Health System.

Patients: Those hospitalized with coronavirus disease 2019 between March 8, 2020, and July 13, 2020.

Interventions: A temporal coronavirus disease 2019 test positive cut point of June 1 was used to separate the development from validation cohorts. Fine and Gray competing risk regression modeling was performed.

Measurements And Main Results: The development set contained 4,520 patients who tested positive for coronavirus disease 2019 between March 8, 2020, and May 31, 2020. The validation set contained 3,150 patients who tested positive between June 1 and July 13. Approximately 9% of patients were admitted to the ICU or died of coronavirus disease 2019 within 2 weeks of testing positive. A prediction cut point of 15% was proposed. Those who exceed the cutoff have a 21% chance of future severe coronavirus disease 2019, whereas those who do not have a 96% chance of avoiding severe coronavirus disease 2019. In addition, application of this decision rule identifies 89% of the population as being at very low risk of severe coronavirus disease 2019 (< 4%).

Conclusions: We have developed and internally validated an algorithm to assess whether someone is at high risk of admission to the ICU or dying from coronavirus disease 2019, should he or she test positive for coronavirus disease 2019. This risk should be a factor in determining resource allocation, protection from less safe working conditions, and prioritization for vaccination.
Source: http://dx.doi.org/10.1097/CCE.0000000000000300
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7746202
December 2020

Public Health Interventions' Effect on Hospital Use in Patients With COVID-19: Comparative Study.

JMIR Public Health Surveill 2020 12 23;6(4):e25174. Epub 2020 Dec 23.

Department of Statistics, Xiamen University, Xiamen, China.

Background: Different states in the United States had different nonpharmaceutical public health interventions during the COVID-19 pandemic. The effects of those interventions on hospital use have not been systematically evaluated. The investigation could provide data-driven evidence to potentially improve the implementation of public health interventions in the future.

Objective: We aim to study two representative areas in the United States and one area in China (New York State, Ohio State, and Hubei Province), and investigate the effects of their public health interventions by time periods according to key interventions.

Methods: This observational study evaluated the numbers of infections, hospitalizations, and deaths in New York and Ohio from March 16 through September 14, 2020, and in Hubei from January 26 to March 31, 2020. We developed novel Bayesian generalized compartmental models. The clinical stages of COVID-19 were stratified in the models, and the effects of public health interventions were modeled through piecewise exponential functions. Time-dependent transmission rates and effective reproduction numbers were estimated. The associations between interventions and the numbers of required hospital and intensive care unit beds were studied.
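
The piecewise modelling of interventions can be illustrated with a much simpler deterministic SIR model whose transmission rate changes at intervention dates; the sketch below (with invented rates and breakpoints) is only a caricature of the Bayesian generalized compartmental models used in the study.

```python
import numpy as np

def sir_piecewise(beta_by_period, breakpoints, gamma=1/10, N=1e7, I0=100, days=200):
    """Deterministic SIR with a piecewise-constant transmission rate.
    `breakpoints` are the days on which interventions change beta."""
    S, I, R = N - I0, float(I0), 0.0
    history = []
    for day in range(days):
        period = np.searchsorted(breakpoints, day, side="right")  # which intervention era
        beta = beta_by_period[period]
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        Rt = beta / gamma * S / N        # effective reproduction number
        history.append((day, I, Rt))
    return history

# e.g., transmission drops after a stay-at-home order (day 20) and mask guidance (day 60)
traj = sir_piecewise(beta_by_period=[0.35, 0.18, 0.12], breakpoints=[20, 60])
print(traj[0], traj[-1])
```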

Results: The interventions of social distancing, home confinement, and wearing masks significantly decreased (in a Bayesian sense) the case incidence and reduced the demand for beds in all areas. Ohio's transmission rates declined before the state's "stay at home" order, which provided evidence that early intervention is important. Wearing masks was significantly associated with reducing the transmission rates after reopening, when comparing New York and Ohio. The centralized quarantine intervention in Hubei played a significant role in further preventing and controlling the disease in that area. The estimated rates at which recovered patients become susceptible again were small in all areas (<0.0001), indicating that they have little chance of becoming infected again.

Conclusions: The series of public health interventions in three areas were temporally associated with the burden of COVID-19-attributed hospital use. Social distancing and the use of face masks should continue to prevent the next peak of the pandemic.
Source: http://dx.doi.org/10.2196/25174
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7759508
December 2020

3-point major cardiovascular event outcome for patients with T2D treated with dipeptidyl peptidase-4 inhibitor or glucagon-like peptide-1 receptor agonist in addition to metformin monotherapy.

Ann Transl Med 2020 Nov;8(21):1345

Quantitative Health Sciences, Cleveland Clinic, Cleveland, OH, USA.

Background: The global incidence of type 2 diabetes (T2D) continues to increase annually, and persons with T2D typically require regular changes in pharmacologic intervention to achieve glycemic targets. Healthcare providers must consider multiple factors when selecting a second-line agent. This retrospective cohort study evaluates the impact of two common anti-diabetes medication classes (dipeptidyl peptidase-4 inhibitors [DPP-4i] and glucagon-like peptide-1 receptor agonists [GLP-1RA]) on the well-known composite 3-point major cardiovascular events outcome (3P-MACE, comprising cardiovascular death, nonfatal myocardial infarction, or nonfatal stroke). No significant impact was found. Persons with T2D face increased risks of many adverse cardiovascular outcomes. This study duplicated common inclusion and exclusion criteria to create an observational cohort from a large healthcare system's electronic health records, testing DPP-4i and GLP-1RA against each other to evaluate their impact on the likelihood of developing 3P-MACE.

Methods: The statistical model and analyses were based on a cohort of 5,518 adult patients with T2D who were prescribed metformin plus either a DPP-4i or a GLP-1RA to control glycemia during clinic visits between January 2005 and September 2019. A Cox proportional hazards model was developed from this cohort to predict the 3P-MACE endpoint.
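
As an illustration of this type of analysis, the sketch below fits a Cox proportional hazards model for a composite endpoint with drug class as the exposure, using the Python lifelines package. The file name, column names, and covariates are hypothetical and are not the study's actual variable list.

```python
# Illustrative sketch only: a Cox proportional hazards model for a composite 3P-MACE
# endpoint with drug class as the exposure. All column names below are assumptions,
# and covariates are assumed to be numeric or already encoded.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("t2d_cohort.csv")                            # hypothetical EHR cohort extract
df["glp1ra"] = (df["drug_class"] == "GLP-1RA").astype(int)    # 1 = GLP-1RA, 0 = DPP-4i (reference)

cph = CoxPHFitter()
cph.fit(
    df[["time_to_mace_days", "mace_event", "glp1ra", "age", "baseline_hba1c", "prior_cvd"]],
    duration_col="time_to_mace_days",   # follow-up until first 3P-MACE component or censoring
    event_col="mace_event",             # 1 = CV death, nonfatal MI, or nonfatal stroke
)
cph.print_summary()                     # hazard ratio for GLP-1RA versus the DPP-4i reference
```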

Results: The model did not show a meaningful difference in the likelihood of developing the 3P-MACE outcome between patients treated with a DPP-4i and those treated with a GLP-1RA.

Conclusions: A prior history of cardiovascular disease (CVD) did not affect the small difference observed between the two drug classes.
Source
http://dx.doi.org/10.21037/atm-20-4063
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7723528
November 2020

Personalizing Radiotherapy Prescription Dose Using Genomic Markers of Radiosensitivity and Normal Tissue Toxicity in NSCLC.

J Thorac Oncol 2021 03 8;16(3):428-438. Epub 2020 Dec 8.

Department of Quantitative Health Sciences, Lerner Research Institute, Cleveland Clinic, Cleveland, Ohio.

Introduction: Cancer sequencing efforts have revealed that cancer is the most complex and heterogeneous disease that affects humans. However, radiation therapy (RT), one of the most common cancer treatments, is prescribed on the basis of an empirical one-size-fits-all approach. We propose that the field of radiation oncology is operating under an outdated null hypothesis: that all patients are biologically similar and should uniformly respond to the same dose of radiation.

Methods: We have previously developed the genomic-adjusted radiation dose (GARD), a method that accounts for biological heterogeneity and can be used to predict the optimal RT dose for an individual patient. In this article, we use GARD to characterize the biological imprecision of one-size-fits-all RT dosing schemes, which result in both overdosing and underdosing for most patients treated with RT. To elucidate this inefficiency, and therefore the opportunity for improvement with a personalized dosing scheme, we develop a patient-specific competing-hazards-style mathematical model combining the canonical equations for tumor control probability and normal tissue complication probability. This model simultaneously optimizes tumor control and toxicity by personalizing the RT dose using patient-specific genomics.
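
The competing-objective idea can be illustrated with a small sketch that trades a Poisson tumor control probability (TCP) off against a Lyman-Kutcher-Burman-style normal tissue complication probability (NTCP) and picks the dose that maximizes their difference for a given patient-specific radiosensitivity parameter. This is a generic, textbook-style illustration under assumed parameter values, not the published GARD-based model.

```python
# Illustrative sketch (not the published model): choose a personalized prescription dose by
# trading off a Poisson TCP against an LKB-style NTCP. The patient-specific alpha stands in
# for a genomically derived radiosensitivity parameter; all numbers are assumptions.
import numpy as np
from scipy.stats import norm

def tcp(total_dose, alpha, beta=0.03, n_fractions=30, clonogens=1e5):
    """Poisson TCP under the linear-quadratic cell-survival model."""
    d = total_dose / n_fractions
    surviving = clonogens * np.exp(-alpha * total_dose - beta * total_dose * d)
    return np.exp(-surviving)

def ntcp(total_dose, td50=74.0, m=0.30):
    """LKB-style NTCP as a probit function of dose (uniform-dose simplification)."""
    return norm.cdf((total_dose - td50) / (m * td50))

doses = np.arange(50.0, 80.1, 0.5)                 # candidate prescription doses in Gy
for alpha in (0.10, 0.20, 0.30):                   # hypothetical patient-specific radiosensitivity
    utility = tcp(doses, alpha) - ntcp(doses)      # simple uncomplicated-control-style trade-off
    best = doses[np.argmax(utility)]
    print(f"alpha={alpha:.2f}: personalized dose of about {best:.1f} Gy")
```

As the loop shows, a more radiosensitive tumor (larger alpha) supports a lower personalized dose, while a radioresistant tumor pushes the optimum toward escalation until the toxicity term dominates.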

Results: Using data from two prospectively collected cohorts of patients with NSCLC, we validate the competing-hazards model by showing that it predicts the results of RTOG 0617. We report how the failure of RTOG 0617 can be explained by the biological imprecision of empirical uniform dose escalation, which results in 80% of patients being overexposed to normal tissue toxicity without a potential tumor control benefit.

Conclusions: Our data reveal a tapestry of radiosensitivity heterogeneity, provide a biological framework that explains the failure of empirical RT dose escalation, and quantify the opportunity to improve clinical outcomes in lung cancer by incorporating genomics into RT.
Source
http://dx.doi.org/10.1016/j.jtho.2020.11.008
March 2021

Development and Validation of a Clinical Prognostic Stage Group System for Nonmetastatic Prostate Cancer Using Disease-Specific Mortality Results From the International Staging Collaboration for Cancer of the Prostate.

JAMA Oncol 2020 12;6(12):1912-1920

Department of Radiation Oncology, Penn State Cancer Institute, Hershey, Pennsylvania.

Importance: In 2016, the American Joint Committee on Cancer (AJCC) established criteria to evaluate prediction models for staging. No localized prostate cancer models were endorsed by the Precision Medicine Core committee, and 8th edition staging was based on expert consensus.

Objective: To develop and validate a pretreatment clinical prognostic stage group system for nonmetastatic prostate cancer.

Design, Setting, And Participants: This multinational cohort study included 7 centers from the United States, Canada, and Europe, the Shared Equal Access Regional Cancer Hospital (SEARCH) Veterans Affairs Medical Centers collaborative (5 centers), and the Cancer of the Prostate Strategic Urologic Research Endeavor (CaPSURE) registry (43 centers) (the STAR-CAP cohort). Patients with cT1-4N0-1M0 prostate adenocarcinoma treated from January 1, 1992, to December 31, 2013, were included (follow-up was completed December 31, 2017). The STAR-CAP cohort was randomly divided into training and validation data sets; statisticians were blinded to the validation data until the model was locked. A Surveillance, Epidemiology, and End Results (SEER) cohort was used as a second validation set. Analysis was performed from January 1, 2018, to November 30, 2019.

Exposures: Curative intent radical prostatectomy (RP) or radiotherapy with or without androgen deprivation therapy.

Main Outcomes And Measures: Prostate cancer-specific mortality (PCSM). Based on a competing-risk regression model, a points-based Score staging system was developed. Model discrimination (C index), calibration, and overall performance were assessed in the validation cohorts.
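
As a generic illustration of how regression coefficients are converted into a points-based staging score, the sketch below applies Sullivan-style point scoring in Python. The coefficients, category levels, and point scale are invented for illustration and are not the published STAR-CAP weights.

```python
# Hedged sketch of turning competing-risks regression coefficients into a points-based
# staging score. Coefficients and the point scale are invented for illustration only.

# Hypothetical log-subdistribution-hazard coefficients per category level
coefficients = {
    "gleason_grade_group": {1: 0.00, 2: 0.35, 3: 0.70, 4: 1.10, 5: 1.60},
    "t_category":          {"T1": 0.00, "T2": 0.25, "T3": 0.80, "T4": 1.20},
    "psa_bin":             {"<10": 0.00, "10-20": 0.30, ">20": 0.75},
}
base_unit = 0.25   # log-hazard increment worth one point (assumed)

points = {
    var: {lvl: int(round(beta / base_unit)) for lvl, beta in levels.items()}
    for var, levels in coefficients.items()
}

def score(patient):
    """Sum category points; higher totals map to higher prognostic stage groups."""
    return sum(points[var][patient[var]] for var in points)

example = {"gleason_grade_group": 4, "t_category": "T3", "psa_bin": "10-20"}
print(points)
print("total points:", score(example))   # stage groups are then assigned by total-point cut points
```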

Results: Of 19 684 patients included in the analysis (median age, 64.0 [interquartile range (IQR), 59.0-70.0] years), 12 421 were treated with RP and 7263 with radiotherapy. Median follow-up was 71.8 (IQR, 34.3-124.3) months; 4078 patients (20.7%) were followed up for at least 10 years. Age, T category, N category, Gleason grade, pretreatment serum prostate-specific antigen level, and the percentage of positive core biopsy results among biopsies performed were included as variables. In the validation set, predicted 10-year PCSM for the 9 Score groups ranged from 0.3% to 40.0%. The 10-year C index of the Score system (0.796; 95% CI, 0.760-0.828) exceeded that of the AJCC 8th edition (0.757; 95% CI, 0.719-0.792), an improvement that held across age, race, and treatment modality and within the SEER validation cohort. The Score system performed similarly to individualized random survival forest and interaction models and outperformed the National Comprehensive Cancer Network (NCCN) and Cancer of the Prostate Risk Assessment (CAPRA) 3- and 4-tier risk grouping systems (10-year C index for NCCN 3-tier, 0.729; for NCCN 4-tier, 0.746; for Score, 0.794) as well as CAPRA (10-year C index for CAPRA, 0.760; for Score, 0.782).
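
For readers who want to reproduce this kind of comparison on their own data, the sketch below computes Harrell's concordance index for several staging or risk-group variables using the Python lifelines package. The file and column names are hypothetical, and this is the standard C index rather than the paper's 10-year time-truncated version; competing events are simply treated as censored here.

```python
# Rough sketch of comparing staging systems by Harrell's concordance index.
# Column names are assumptions; competing events are treated as censored (a simplification).
import pandas as pd
from lifelines.utils import concordance_index

df = pd.read_csv("validation_cohort.csv")   # hypothetical columns: follow-up, PCSM event, stage scores

for system in ["score_group", "ajcc8_stage", "nccn_tier", "capra_score"]:
    c = concordance_index(
        df["followup_months"],
        -df[system],            # negate so that a higher stage/score predicts shorter survival
        df["pcsm_event"],       # 1 = death from prostate cancer, 0 = censored or competing event
    )
    print(f"{system}: C index = {c:.3f}")
```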

Conclusions And Relevance: Using a large, diverse international cohort treated with standard curative treatment options, a proposed AJCC-compliant clinical prognostic stage group system for prostate cancer has been developed. This system may allow consistency of reporting and interpretation of results and clinical trial design.
Source
http://dx.doi.org/10.1001/jamaoncol.2020.4922
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7582232
December 2020