J Clin Oncol 2019 Aug 19;37(23):2062-2071. Epub 2019 Jun 19.
1Fox Chase Cancer Center, Philadelphia, PA.
Purpose: To validate currently used recurrence prediction models for renal cell carcinoma (RCC) by using prospective data from the ASSURE (ECOG-ACRIN E2805; Adjuvant Sorafenib or Sunitinib for Unfavorable Renal Carcinoma) adjuvant trial.
Patients and Methods: Eight RCC recurrence models (University of California at Los Angeles Integrated Staging System [UISS]; Stage, Size, Grade, and Necrosis [SSIGN]; Leibovich; Kattan; Memorial Sloan Kettering Cancer Center [MSKCC]; Yaycioglu; Karakiewicz; and Cindolo) were selected on the basis of their use in clinical practice and clinical trial designs. These models, along with the TNM staging system, were validated using 1,647 patients with resected localized high-grade or locally advanced disease (≥ pT1b grade 3 and 4/pTanyN1M0) from the ASSURE cohort. The predictive performance of each model was quantified by assessing its discriminatory and calibration abilities.
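The discriminatory ability referenced above is conventionally summarized with a concordance statistic such as Harrell's C-index, which the Results section reports for each model. The following is a minimal illustrative sketch of that statistic, not code from the study; the function name and toy inputs are assumptions for demonstration:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable patient pairs (the patient with
    the earlier observed event can be ordered against the other), the
    fraction in which the model assigned that patient the higher risk.
    Ties in predicted risk count as half-concordant. 1.0 = perfect
    discrimination; 0.5 = no better than chance."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i had an observed
            # (uncensored) event strictly before patient j's time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: four patients, risk scores perfectly ordered with outcome.
c = concordance_index([2, 4, 6, 8], [1, 1, 1, 1], [4, 3, 2, 1])
```

In practice, validated implementations (e.g., in standard survival-analysis libraries) handle censoring and ties more carefully, but the pairwise-comparison logic above is the core of the discrimination estimates quoted in the Results.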
Results: Prospective validation of predictive and prognostic models for localized RCC showed a substantial decrease in the predictive ability of each model compared with its original and externally validated discriminatory estimates. Among the models, the SSIGN score performed best (C-index, 0.688; 95% CI, 0.686 to 0.689), and the UISS model performed worst (C-index, 0.556; 95% CI, 0.555 to 0.557). Compared with the 2002 TNM staging system (C-index, 0.60), most models only marginally outperformed standard staging. Importantly, all models, including TNM, demonstrated statistically significant variability in their predictive ability over time and were most useful within the first 2 years after diagnosis.
Conclusion: In RCC, as in many other solid malignancies, clinicians rely on retrospective prediction tools to guide patient care and clinical trial selection and largely overestimate their predictive abilities. We used prospectively collected adjuvant trial data to validate existing RCC prediction models and demonstrated a sharp decrease in the predictive ability of all models compared with their previous retrospective validations. Accordingly, we recommend prospective validation of any predictive model before implementing it into clinical practice and clinical trial design.