Publications by authors named "David B Larson"

102 Publications

Program for Supporting Frontline Improvement Projects in an Academic Radiology Department.

AJR Am J Roentgenol 2021 Apr 28:1-10. Epub 2021 Apr 28.

Imaging Services Department, Stanford Health Care, Stanford, CA.

The purpose of this study was to describe the results of an ongoing program implemented in an academic radiology department to support the execution of small- to medium-size improvement projects led by frontline staff and leaders. Staff members were assigned a coach, were instructed in improvement methods, were given time to work on the project, and presented progress to department leaders in weekly 30-minute reports. Estimated costs and outcomes were calculated for each project and aggregated. An anonymous survey was administered to participants at the end of the first year. A total of 73 participants completed 102 projects in the first 2 years of the program. The project type mix included 25 quality improvement projects, 22 patient satisfaction projects, 14 staff engagement projects, 27 efficiency improvement projects, and 14 regulatory compliance and readiness projects. Estimated annualized outcomes included approximately 4500 labor hours saved, $315K in supply cost savings, $42.2M in potential increased revenues, 8- and 2-point increases in top-box patient experience scores at two clinics, and a 60-incident reduction in near-miss safety events. Participant time equated to approximately 0.35 full-time equivalent positions per year. Approximately 0.4 full-time equivalent was required to support the program. Survey results indicated that the participants generally viewed the program favorably. The program was successful in providing a platform for simultaneously solving a large number of organizational problems while also providing a positive experience to frontline personnel.

Source
http://dx.doi.org/10.2214/AJR.20.23421
April 2021

Optimizing Professional Practice Evaluation to Enable a Nonpunitive Learning Health System Approach to Peer Review.

Pediatr Qual Saf 2021 Jan-Feb;6(1):e375. Epub 2020 Dec 28.

Department of Pediatrics, Stanford University, School of Medicine, Palo Alto, Calif.

Healthcare organizations are focused on 2 different and sometimes conflicting tasks: (1) accelerating the improvement of clinical care delivery and (2) collecting provider-specific data to determine the competency of providers. We describe the creation of a process to meet both of these aims while maintaining a culture that fosters improvement and teamwork.

Methods: We created a new process to sequester activities related to learning and improvement from those focused on individual provider performance. We describe this process, including data on the number and type of cases reviewed and survey results of the participant's perception of the new process.

Results: In the new model, professional practice evaluation committees evaluate events purely to identify system issues and human factors related to medical decision-making, resulting in actionable improvements. Separate, sequestered processes evaluate concerns about an individual provider's clinical competence or behavior. During the first 5 years of this process, 207 of 217 activities (99.5%) related to system issues rather than issues concerning individual provider competence or behavior. Participants perceived the new process as focused on identifying system errors (4.3/5), nonpunitive (4.2/5), and an improvement (4.0/5), and reported that it helped with engagement in our system and contributed to wellness (4.0/5).

Conclusion: We believe this sequestered approach has enabled us to meet oversight mandates to ensure provider competence while enabling a learning health systems approach that builds the cultural aspects of trust and teamwork essential to driving continuous improvement in our system of care.

Source
http://dx.doi.org/10.1097/pq9.0000000000000375
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7781295
December 2020

CT Volumes from 2,398 Radiology Practices in the United States: A Real-Time Indicator of the Effect of COVID-19 on Routine Care, January to September 2020.

J Am Coll Radiol 2021 03 21;18(3 Pt A):380-387. Epub 2020 Oct 21.

Vice Chair of Education and Clinical Operations, Department of Radiology, Stanford University School of Medicine, Stanford, California.

Purpose: To determine the effect of coronavirus disease 2019 (COVID-19) on CT volumes in the United States during and after the first wave of the pandemic.

Methods: CT volumes from 2,398 US radiology practices participating in the ACR Dose Index Registry from January 1, 2020, to September 30, 2020, were analyzed. Data were compared to projected CT volumes using 2019 normative data and analyzed with respect to time since government orders, population-normalized positive COVID-19 tests, and attributed deaths. Data were stratified by state population density, unemployment status, and race.

Results: There were 16,198,830 CT examinations (2,398 practices). Volume nadir occurred an average of 32 days after each state-of-emergency declaration and 12 days after each stay-at-home order. At nadir, the projected volume loss was 38,043 CTs per day (of 71,626 CTs per day; 53% reduction). Over the entire study period, there were 3,689,874 fewer CT examinations performed than predicted (of 18,947,969; 19% reduction). There was less reduction in states with smaller population density (15% [169,378 of 1,142,247; quartile 1] versus 21% [1,894,152 of 9,140,689; quartile 4]) and less reduction in states with a lower insured unemployed proportion (13% [279,331 of 2,071,251; quartile 1] versus 23% [1,753,521 of 7,496,443; quartile 4]). By September 30, CT volume had returned to 84% (59,856 of 71,321) of predicted; recovery of CT volume occurred as positive COVID-19 tests rose and deaths were in decline.

Conclusion: COVID-19 substantially reduced US CT volume, reflecting delayed and deferred care, especially in states with greater unemployment. Partial volume recovery occurred despite rising positive COVID-19 tests.
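The headline percentages in this abstract can be reproduced directly from the quoted counts; a quick sketch using only the numbers stated above:

```python
# Back-of-envelope check of the volume figures quoted in the abstract:
# percent reduction = shortfall / projected volume.
projected_daily = 71_626     # projected CTs per day at the nadir
lost_daily = 38_043          # daily shortfall at the nadir
print(round(lost_daily / projected_daily * 100))   # 53 (% reduction at nadir)

projected_total = 18_947_969  # projected CTs over the full study period
shortfall = 3_689_874         # total examinations fewer than predicted
print(round(shortfall / projected_total * 100))    # 19 (% reduction overall)
```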

Source
http://dx.doi.org/10.1016/j.jacr.2020.10.010
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7577702
March 2021

Regulatory Frameworks for Development and Evaluation of Artificial Intelligence-Based Diagnostic Imaging Algorithms: Summary and Recommendations.

J Am Coll Radiol 2021 Mar 20;18(3 Pt A):413-424. Epub 2020 Oct 20.

Associate Chair, Information Systems, Department of Radiology, Stanford University School of Medicine, Stanford, California.

Although artificial intelligence (AI)-based algorithms for diagnosis hold promise for improving care, their safety and effectiveness must be ensured to facilitate wide adoption. Several recently proposed regulatory frameworks provide a solid foundation but do not address a number of issues that may prevent algorithms from being fully trusted. In this article, we review the major regulatory frameworks for software as a medical device applications, identify major gaps, and propose additional strategies to improve the development and evaluation of diagnostic AI algorithms. We identify the following major shortcomings of the current regulatory frameworks: (1) conflation of the diagnostic task with the diagnostic algorithm, (2) superficial treatment of the diagnostic task definition, (3) no mechanism to directly compare similar algorithms, (4) insufficient characterization of safety and performance elements, (5) lack of resources to assess performance at each installed site, and (6) inherent conflicts of interest. We recommend the following additional measures: (1) separate the diagnostic task from the algorithm, (2) define performance elements beyond accuracy, (3) divide the evaluation process into discrete steps, (4) encourage assessment by a third-party evaluator, (5) incorporate these elements into the manufacturers' development process. Specifically, we recommend four phases of development and evaluation, analogous to those that have been applied to pharmaceuticals and proposed for software applications, to help ensure world-class performance of all algorithms at all installed sites. In the coming years, we anticipate the emergence of a substantial body of research dedicated to ensuring the accuracy, reliability, and safety of the algorithms.

Source
http://dx.doi.org/10.1016/j.jacr.2020.09.060
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7574690
March 2021

Recognizing and Avoiding the Most Common Mistakes in Quality Improvement.

J Am Coll Radiol 2021 Mar 16;18(3 Pt B):511-513. Epub 2020 Oct 16.

Associate Chair, Performance Improvement, Department of Radiology, Stanford University School of Medicine, Stanford, California.


Source
http://dx.doi.org/10.1016/j.jacr.2020.09.053
March 2021

Prospective Deployment of Deep Learning in MRI: A Framework for Important Considerations, Challenges, and Recommendations for Best Practices.

J Magn Reson Imaging 2020 Aug 24. Epub 2020 Aug 24.

Department of Radiology, Stanford University, Stanford, California, USA.

Artificial intelligence algorithms based on principles of deep learning (DL) have made a large impact on the acquisition, reconstruction, and interpretation of MRI data. Despite the large number of retrospective studies using DL, there are fewer applications of DL in the clinic on a routine basis. To address this large translational gap, we review the recent publications to determine three major use cases that DL can have in MRI, namely, that of model-free image synthesis, model-based image reconstruction, and image or pixel-level classification. For each of these three areas, we provide a framework for important considerations that consist of appropriate model training paradigms, evaluation of model robustness, downstream clinical utility, opportunities for future advances, as well as recommendations for best current practices. We draw inspiration for this framework from advances in computer vision in natural imaging as well as additional healthcare fields. We further emphasize the need for reproducibility of research studies through the sharing of datasets and software. Level of Evidence: 5. Technical Efficacy Stage: 2.

Source
http://dx.doi.org/10.1002/jmri.27331
August 2020

Critical Results in Radiology: Defined by Clinical Judgment or by a List?

J Am Coll Radiol 2021 Feb 9;18(2):294-297. Epub 2020 Aug 9.

Vice Chair for Education and Clinical Operations, Associate Chief Quality Officer for Improvement for Stanford Health Care, physician co-leader of the Stanford Medicine Center for Improvement at Stanford University, Stanford University Medical Center, Stanford, California.


Source
http://dx.doi.org/10.1016/j.jacr.2020.07.009
February 2021

Transitioning From Peer Review to Peer Learning: Report of the 2020 Peer Learning Summit.

J Am Coll Radiol 2020 Nov 6;17(11):1499-1508. Epub 2020 Aug 6.

Chair, Department of Radiology, Beth Israel Deaconess Medical Center, Boston, Massachusetts.

Since its introduction nearly 20 years ago, score-based peer review has not been shown to have meaningful impact on or be a valid measurement instrument of radiologist performance. A new paradigm has emerged, peer learning, which is a group activity in which expert professionals review one another's work, actively give and receive feedback in a constructive manner, teach and learn from one another, and mutually commit to improving performance as individuals, as a group, and as a system. Many radiology practices are beginning to transition from score-based peer review to peer learning. To address challenges faced by these practices, a 1-day summit was convened at Harvard Medical School in January 2020, sponsored by the ACR. Several important themes emerged. Elements considered key to a peer-learning program include broad group participation, active identification of learning opportunities, individual feedback, peer-learning conferences, link with process and system improvement activities, preservation of organizational culture, sequestration of peer-learning activities from evaluation mechanisms, and program management. Radiologists and practice leaders are encouraged to develop peer-learning programs tailored to their local practice environment and foster a positive organizational culture. Health system administrators should support active peer-learning programs in the place of score-based peer review. Accrediting organizations should formally recognize peer learning as an acceptable form of peer review and specify minimum criteria for peer-learning programs. IT system vendors should actively collaborate with radiology organizations to develop solutions that support the efficient and effective management of local peer-learning programs.

Source
http://dx.doi.org/10.1016/j.jacr.2020.07.016
November 2020

Needs of Referring Providers by Practice Type: Results of a Survey at an Academic Medical Center.

AJR Am J Roentgenol 2021 01 19;216(1):216-224. Epub 2020 Nov 19.

Department of Radiology, Stanford University, 300 Pasteur Dr, Stanford, CA 94305-5105.

The purpose of this study was to test a published hypothetical framework of different referring provider needs for primary care, specialty care, and urgent or emergency care practitioners through questions asked in an annual survey at an academic medical center. Seven questions regarding provider needs were included in an annual online anonymous survey of referring providers. Multiple-choice response options were provided. Differences in responses between provider types were assessed using the Mann-Whitney test. The survey was sent to 3325 providers, and 514 responses were received (response rate, 15.5%). The analysis included 340 responses: 81 from primary care, 205 from specialty care, and 54 from urgent or emergency care. Results indicated that urgent or emergency care providers need examinations to be performed and interpreted more quickly, specialist providers prefer greater radiologist specialization, urgent or emergency care providers order imaging with greater frequency, primary care and urgent or emergency care providers order a greater breadth of imaging, primary care providers report greater reliance on radiologist interpretations, and all provider types highly value direct interactions with radiologists. All results were statistically significant and matched established hypotheses. Our results support the concept that referring providers tend to value different aspects of radiology services differently, according to predictable characteristics. The findings suggest that the concept of value in radiology is highly context-specific and can be evaluated, at least in part, using practice-specific referring provider assessments.
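The group comparison the abstract describes can be illustrated with a toy Mann-Whitney U test on ordinal survey ratings; the response values below are invented for illustration and are not the study's data:

```python
# Toy version of the abstract's method: compare ordinal survey responses
# from two referring-provider groups with the Mann-Whitney U test.
from scipy.stats import mannwhitneyu

# Hypothetical 1-5 ratings of "how quickly do you need results?"
urgent_care = [5, 5, 4, 5, 4, 5, 3, 5, 4, 5]
primary_care = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]

stat, p = mannwhitneyu(urgent_care, primary_care, alternative="two-sided")
print(stat, p)  # small p suggests the two groups rate urgency differently
```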

Source
http://dx.doi.org/10.2214/AJR.19.22738
January 2021

Variables Influencing Radiology Volume Recovery During the Next Phase of the Coronavirus Disease 2019 (COVID-19) Pandemic.

J Am Coll Radiol 2020 Jul 1;17(7):855-864. Epub 2020 Jun 1.

Vice Chair of Education and Clinical Operations, Department of Radiology, Stanford University School of Medicine, Stanford, California. Electronic address:

The coronavirus disease 2019 (COVID-19) pandemic has reduced radiology volumes across the country as providers have decreased elective care to minimize the spread of infection and free up health care delivery system capacity. After the stay-at-home order was issued in our county, imaging volumes at our institution decreased to approximately 46% of baseline volumes, similar to the experience of other radiology practices. Given the substantial differences in severity and timing of the disease in different geographic regions, estimating resumption of radiology volumes will be one of the next major challenges for radiology practices. We hypothesize that there are six major variables that will likely predict radiology volumes: (1) severity of disease in the local region, including potential subsequent "waves" of infection; (2) lifting of government social distancing restrictions; (3) patient concern regarding risk of leaving home and entering imaging facilities; (4) management of pent-up demand for imaging delayed during the acute phase of the pandemic, including institutional capacity; (5) impact of the economic downturn on health insurance and ability to pay for imaging; and (6) radiology practice profile reflecting amount of elective imaging performed, including type of patients seen by the radiology practice such as emergency, inpatient, outpatient mix and subspecialty types. We encourage radiology practice leaders to use these and other relevant variables to plan for the coming weeks and to work collaboratively with local health system and governmental leaders to help ensure that needed patient care is restored as quickly as the environment will safely permit.

Source
http://dx.doi.org/10.1016/j.jacr.2020.05.026
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7262523
July 2020

Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework.

Radiology 2020 Jun 24;295(3):675-682. Epub 2020 Mar 24.

From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, Stanford, CA 94305-5105.

In this article, the authors propose an ethical framework for using and sharing clinical data for the development of artificial intelligence (AI) applications. The philosophical premise is as follows: when clinical data are used to provide care, the primary purpose for acquiring the data is fulfilled. At that point, clinical data should be treated as a form of public good, to be used for the benefit of future patients. In their 2013 article, Faden et al argued that all who participate in the health care system, including patients, have a moral obligation to contribute to improving that system. The authors extend that framework to questions surrounding the secondary use of clinical data for AI applications. Specifically, the authors propose that all individuals and entities with access to clinical data become data stewards, with fiduciary (or trust) responsibilities to patients to carefully safeguard patient privacy, and to the public to ensure that the data are made widely available for the development of knowledge and tools to benefit future patients. According to this framework, the authors maintain that it is unethical for providers to "sell" clinical data to other parties by granting access to clinical data, especially under exclusive arrangements, in exchange for monetary or in-kind payments that exceed costs. The authors also propose that patient consent is not required before the data are used for secondary purposes when obtaining such consent is prohibitively costly or burdensome, as long as mechanisms are in place to ensure that ethical standards are strictly followed. Rather than debate whether patients or provider organizations "own" the data, the authors propose that clinical data are not owned at all in the traditional sense, but rather that all who interact with or control the data have an obligation to ensure that the data are used for the benefit of future patients and society.

Source
http://dx.doi.org/10.1148/radiol.2020192536
June 2020

Improving Automated Pediatric Bone Age Estimation Using Ensembles of Models from the 2017 RSNA Machine Learning Challenge.

Radiol Artif Intell 2019 Nov 20;1(6):e190053. Epub 2019 Nov 20.

Department of Radiology, Warren Alpert Medical School, Brown University, 593 Eddy St, Providence, RI 02903 (I.P.); Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI (I.P.); Visiana, Hørsholm, Denmark (H.H.T.); Department of Radiology, Stanford University, Palo Alto, Calif (S.S.H., D.B.L.); and Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Mass (J.K.C.).

Purpose: To investigate improvements in performance for automatic bone age estimation that can be gained through model ensembling.

Materials And Methods: A total of 48 submissions from the 2017 RSNA Pediatric Bone Age Machine Learning Challenge were used. Participants were provided with 12 611 pediatric hand radiographs with bone ages determined by a pediatric radiologist to develop models for bone age determination. The final results were determined using a test set of 200 radiographs labeled with the weighted average of six ratings. The mean pairwise model correlation and performance of all possible model combinations for ensembles of up to 10 models using the mean absolute deviation (MAD) were evaluated. A bootstrap analysis using the 200 test radiographs was conducted to estimate the true generalization MAD.

Results: The estimated generalization MAD of a single model was 4.55 months. The best-performing ensemble consisted of four models with an MAD of 3.79 months. The mean pairwise correlation of models within this ensemble was 0.47. In comparison, the lowest achievable MAD by combining the highest-ranking models based on individual scores was 3.93 months using eight models with a mean pairwise model correlation of 0.67.

Conclusion: Combining less-correlated, high-performing models resulted in better performance than naively combining the top-performing models. Machine learning competitions within radiology should be encouraged to spur development of heterogeneous models whose predictions can be combined to achieve optimal performance. © RSNA, 2019. See also the commentary by Siegel in this issue.
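The ensembling strategy described above, scoring every small model subset by the mean absolute deviation (MAD) of its averaged predictions, can be sketched as follows; the "models" and reference bone ages here are synthetic stand-ins, not the challenge submissions:

```python
# Sketch of subset-search ensembling: average predictions of each candidate
# model subset and rank subsets by MAD against reference bone ages (months).
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(0, 228, size=200)  # synthetic reference bone ages
# Synthetic "models": truth plus noise of differing magnitude
preds = {f"model_{i}": truth + rng.normal(0, 4 + i, size=200) for i in range(6)}

def mad(members):
    """MAD of the mean prediction of the chosen models vs. reference ages."""
    mean_pred = np.mean([preds[m] for m in members], axis=0)
    return float(np.mean(np.abs(mean_pred - truth)))

# Exhaustively score every ensemble of up to 4 models and keep the best
candidates = [c for r in range(1, 5) for c in combinations(preds, r)]
best = min(candidates, key=mad)
print(best, round(mad(best), 2))
```

As in the study, the best small ensemble typically beats the best single model, because averaging less-correlated errors cancels them.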

Source
http://dx.doi.org/10.1148/ryai.2019190053
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6884060
November 2019

Imaging Quality Control in the Era of Artificial Intelligence.

J Am Coll Radiol 2019 Sep 26;16(9 Pt B):1259-1266. Epub 2019 Jun 26.

Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts.

The advent of artificial intelligence (AI) promises to have a transformational impact on quality in medicine, including in radiology. However, experience has shown that quality tools alone are often not sufficient to bring about consistent excellent performance. Specifically, rather than assuming outcome targets are consistently met, in quality control, managers assume that wide variation is likely present unless proven otherwise with objective performance data. In this article, we discuss what we consider to be the eight essential elements required to achieve comprehensive process control, necessary to deliver consistent quality in radiology: a process control framework, performance measures, performance standards and targets, monitoring applications, prediction models, optimization models, feedback mechanisms, and accountability mechanisms. We consider these elements to be universally applicable, including in the application of AI-based models. We also discuss how the lack of specific elements of a quality control program can hinder widespread quality control efforts. We illustrate the concept using the example of a CT radiation dose optimization and process control program previously developed by one of the authors and provide several examples of how AI-based tools might be used for quality control in radiology.

Source
http://dx.doi.org/10.1016/j.jacr.2019.05.048
September 2019

Deep learning to automate Brasfield chest radiographic scoring for cystic fibrosis.

J Cyst Fibros 2020 01 2;19(1):131-138. Epub 2019 May 2.

Department of Radiology, Stanford University School of Medicine, 725 Welch Road, Stanford, CA 94305, USA.

Background: The aim of this study was to evaluate the hypothesis that a deep convolutional neural network (DCNN) model could facilitate automated Brasfield scoring of chest radiographs (CXRs) for patients with cystic fibrosis (CF), performing similarly to a pediatric radiologist.

Methods: All frontal/lateral chest radiographs (2058 exams) performed in CF patients at a single institution from January 2008-2018 were retrospectively identified, and ground-truth Brasfield scoring performed by a board-certified pediatric radiologist. 1858 exams (90.3%) were used to train and validate the DCNN model, while 200 exams (9.7%) were reserved for a test set. Five board-certified pediatric radiologists independently scored the test set according to the Brasfield method. DCNN model vs. radiologist performance was compared using Spearman correlation (ρ) as well as mean difference (MD), mean absolute difference (MAD), and root mean squared error (RMSE) estimation.

Results: For the total Brasfield score, ρ for the model-derived results computed pairwise with each radiologist's scores ranged from 0.79-0.83, compared to 0.85-0.90 for radiologist vs. radiologist scores. The MD between model estimates of the total Brasfield score and the average score of radiologists was -0.09. Based on MD, MAD, and RMSE, the model matched or exceeded radiologist performance for all subfeatures except air-trapping and large lesions.

Conclusions: A DCNN model is promising for predicting CF Brasfield scores with accuracy similar to that of a pediatric radiologist.
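The agreement statistics this study reports (Spearman ρ, mean difference, mean absolute difference, RMSE) can be computed as below; the Brasfield scores here are invented for illustration, not the study's data:

```python
# Toy computation of the model-vs.-radiologist agreement metrics named
# in the abstract, on invented total Brasfield scores.
import numpy as np
from scipy.stats import spearmanr

model = np.array([20, 18, 22, 15, 19, 24, 17, 21], dtype=float)
reader = np.array([21, 17, 22, 14, 20, 23, 18, 22], dtype=float)

diff = model - reader
rho, _ = spearmanr(model, reader)   # rank agreement
md = diff.mean()                    # mean difference: signed bias
mad_ = np.abs(diff).mean()          # mean absolute difference: typical error
rmse = np.sqrt((diff ** 2).mean())  # RMSE: penalizes large misses
print(round(rho, 2), round(md, 2), round(mad_, 2), round(rmse, 2))
```

Reporting all four together is informative because a model can have near-zero bias (MD) while still making sizable individual errors (MAD, RMSE).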

Source
http://dx.doi.org/10.1016/j.jcf.2019.04.016
January 2020

Quality and safety in pediatric radiology.

Pediatr Radiol 2019 04 29;49(4):431-432. Epub 2019 Mar 29.

Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA.


Source
http://dx.doi.org/10.1007/s00247-019-04353-0
April 2019

Measuring Diagnostic Radiologists: What Measurements Should We Use?

J Am Coll Radiol 2019 Mar 2;16(3):333-335. Epub 2019 Feb 2.

Department of Radiology, Stanford University School of Medicine, Stanford, California.


Source
http://dx.doi.org/10.1016/j.jacr.2018.12.011
March 2019

Re: "Reducing Variability of Radiation Dose in CT".

Authors:
David B Larson

J Am Coll Radiol 2018 12;15(12):1669-1670

Associate Professor of Pediatric Radiology, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA 94305-5105. Electronic address:


Source
http://dx.doi.org/10.1016/j.jacr.2018.07.008
December 2018

Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet.

PLoS Med 2018 11 27;15(11):e1002699. Epub 2018 Nov 27.

Department of Radiology, Stanford University, Stanford, California, United States of America.

Background: Magnetic resonance imaging (MRI) of the knee is the preferred method for diagnosing knee injuries. However, interpretation of knee MRI is time-intensive and subject to diagnostic error and variability. An automated system for interpreting knee MRI could prioritize high-risk patients and assist clinicians in making diagnoses. Deep learning methods, in being able to automatically learn layers of features, are well suited for modeling the complex relationships between medical images and their interpretations. In this study we developed a deep learning model for detecting general abnormalities and specific diagnoses (anterior cruciate ligament [ACL] tears and meniscal tears) on knee MRI exams. We then measured the effect of providing the model's predictions to clinical experts during interpretation.

Methods And Findings: Our dataset consisted of 1,370 knee MRI exams performed at Stanford University Medical Center between January 1, 2001, and December 31, 2012 (mean age 38.0 years; 569 [41.5%] female patients). The majority vote of 3 musculoskeletal radiologists established reference standard labels on an internal validation set of 120 exams. We developed MRNet, a convolutional neural network for classifying MRI series and combined predictions from 3 series per exam using logistic regression. In detecting abnormalities, ACL tears, and meniscal tears, this model achieved area under the receiver operating characteristic curve (AUC) values of 0.937 (95% CI 0.895, 0.980), 0.965 (95% CI 0.938, 0.993), and 0.847 (95% CI 0.780, 0.914), respectively, on the internal validation set. We also obtained a public dataset of 917 exams with sagittal T1-weighted series and labels for ACL injury from Clinical Hospital Centre Rijeka, Croatia. On the external validation set of 183 exams, the MRNet trained on Stanford sagittal T2-weighted series achieved an AUC of 0.824 (95% CI 0.757, 0.892) in the detection of ACL injuries with no additional training, while an MRNet trained on the rest of the external data achieved an AUC of 0.911 (95% CI 0.864, 0.958). We additionally measured the specificity, sensitivity, and accuracy of 9 clinical experts (7 board-certified general radiologists and 2 orthopedic surgeons) on the internal validation set both with and without model assistance. Using a 2-sided Pearson's chi-squared test with adjustment for multiple comparisons, we found no significant differences between the performance of the model and that of unassisted general radiologists in detecting abnormalities. General radiologists achieved significantly higher sensitivity in detecting ACL tears (p-value = 0.002; q-value = 0.019) and significantly higher specificity in detecting meniscal tears (p-value = 0.003; q-value = 0.019). 
Using a 1-tailed t test on the change in performance metrics, we found that providing model predictions significantly increased clinical experts' specificity in identifying ACL tears (p-value < 0.001; q-value = 0.006). The primary limitations of our study include lack of surgical ground truth and the small size of the panel of clinical experts.

Conclusions: Our deep learning model can rapidly generate accurate clinical pathology classifications of knee MRI exams from both internal and external datasets. Moreover, our results support the assertion that deep learning models can improve the performance of clinical experts during medical imaging interpretation. Further research is needed to validate the model prospectively and to determine its utility in the clinical setting.
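The series-fusion step described in the methods, a logistic regression over per-series CNN probabilities, can be sketched as follows; the exam labels and probability values are synthetic assumptions, not MRNet's actual outputs:

```python
# Sketch of per-exam fusion: each MRI series yields one abnormality
# probability, and logistic regression combines the three per-series
# probabilities into a single exam-level prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=120)  # synthetic exam-level labels (0/1)
# Synthetic per-series probabilities (e.g. sagittal, coronal, axial):
# abnormal exams cluster near 0.8, normal exams near 0.2
probs = np.clip(labels[:, None] * 0.6 + rng.normal(0.2, 0.2, (120, 3)), 0, 1)

fuser = LogisticRegression().fit(probs, labels)  # learn fusion weights
exam_scores = fuser.predict_proba(probs)[:, 1]   # per-exam probability
print(exam_scores[:3])
```

Fitting a small fusion model, rather than simply averaging the series, lets more informative series receive larger weights.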

Source
http://dx.doi.org/10.1371/journal.pmed.1002699
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6258509
November 2018

Strategies for Implementing a Standardized Structured Radiology Reporting Program.

Authors:
David B Larson

Radiographics 2018 Oct;38(6):1705-1716

From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, Stanford, CA 94305-5105.

Radiology practices are increasingly implementing standardized report templates to overcome the drawbacks of individual templates. However, implementing a standardized structured reporting program is not necessarily straightforward. This article provides practical guidance for radiologists who wish to implement standardized structured reporting in their practice. Challenges that radiology groups encounter tend to fall into two categories: technical and organizational. Defining and carrying out technical work can be tedious but tends to be relatively straightforward, whereas overcoming organizational challenges often requires changes in individuals' strongly held values, beliefs, roles, and relationships. Established organizational change models can help frame the organizational strategy to implement a standardized structured reporting program. Once leadership support is secured, a standardized structured reporting committee can be convened to establish report priorities, standards, design principles, and guidelines. Report standards help to establish the common framework upon which all report templates are constructed, helping to ensure report consistency. By using these standards, committee members can create reports relevant to their subspecialties, which can then be edited for formatting and content. Once report templates have been developed, edited, and published, an abbreviated form of the same process can be used to maintain the reports, which can be accomplished with much less effort than that initially required to create the templates. After standardized structured report templates are implemented and become embedded in practice, most radiologists eventually appreciate the merits of the program. © RSNA, 2018.
DOI: http://dx.doi.org/10.1148/rg.2018180040
October 2018

Strategies for Radiology to Thrive in the Value Era.

Radiology 2018 Oct;289(1):3-7. Epub 2018 Sep 4.

From the Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, One Deaconess Rd, Boston, MA 02215 (J.B.K.); and Department of Radiology, Stanford University School of Medicine, Stanford, Calif (D.B.L.).

DOI: http://dx.doi.org/10.1148/radiol.2018180190
October 2018

Improving and Maintaining Radiologic Technologist Skill Using a Medical Director Partnership and Technologist Coaching Model.

AJR Am J Roentgenol 2018 Nov;211(5):986-992. Epub 2018 Jul 31.

Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, Stanford, CA 94305-5105.

Objective: Consistent excellence in radiologic technologist performance, including ensuring high technical image quality, patient safety and comfort, and efficient workflow, largely depends on individual technologist skill. However, sustained growth in the size and complexity of health care organizations has increased the difficulty in developing and maintaining technologist expertise. In this article, we explore underlying organizational structures that contribute to this problem and propose organizational models to promote continued excellence in technologist skill.

Conclusion: We have found that a relatively modest investment in medical directorship combined with a coaching model can bring about a significant level of improvement in skilled clinical performance. We believe that widespread implementation of similar programs could contribute to substantial improvements in quality in radiology and other health care settings.
DOI: http://dx.doi.org/10.2214/AJR.18.19970
November 2018

Improving Performance of Mammographic Breast Positioning in an Academic Radiology Practice.

AJR Am J Roentgenol 2018 Apr 7;210(4):807-815. Epub 2018 Feb 7.

Department of Radiology, Stanford University, 300 Pasteur Dr, Stanford, CA 94305-5105.

Objective: The purpose of this project was to achieve sustained improvement in mammographic breast positioning in our department.

Materials And Methods: Between June 2013 and December 2016, we conducted a team-based performance improvement initiative with the goal of improving mammographic positioning. The team of technologists and radiologists established quantitative measures of positioning performance based on American College of Radiology (ACR) criteria, audited at least 35 mammograms per week for positioning quality, displayed performance in dashboards, provided technologists with positioning training, developed a supportive environment fostering technologist and radiologist communication surrounding mammographic positioning, and employed a mammography positioning coach to develop, improve, and maintain technologist positioning performance. Statistical significance of changes in the percentage of mammograms passing the ACR criteria was evaluated using a two-proportion z test.

Results: A baseline mammogram audit performed in June 2013 showed that 67% (82/122) met ACR passing criteria for positioning. Performance improved to 80% (588/739; p < 0.01) after positioning training and technologist and radiologist agreement on positioning criteria. With individual technologist feedback, positioning further improved, with 91% of mammograms passing ACR criteria (p < 0.01). Seven months later, performance temporarily decreased to 80% but improved to 89% with implementation of a positioning coach. The overall mean performance of 91% has been sustained for 23 months. The program cost approximately $30,000 to develop, $42,000 to launch, and $25,000 per year to maintain. Almost all costs were related to personnel time.
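The counts reported above are sufficient to reproduce the significance test. The sketch below uses the standard pooled two-proportion z test, which may differ in detail from the authors' exact computation:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z test; returns (z, two-sided p value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)           # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 1 - erf(abs(z) / sqrt(2))      # two-sided normal tail
    return z, p_value

# Counts from the abstract: 82/122 passing at baseline vs 588/739 after
# positioning training.
z, p = two_proportion_z(82, 122, 588, 739)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

With these counts the test yields p < 0.01, consistent with the significance level reported in the abstract.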

Conclusion: Dedicated performance improvement methods may achieve significant and sustained improvement in mammographic breast positioning, which may better enable facilities to pass the recently instituted Enhancing Quality Using the Inspection Program (EQUIP) portion of a practice's annual Mammography Quality Standards Act inspections.
DOI: http://dx.doi.org/10.2214/AJR.17.18212
April 2018

Practical Suggestions on How to Move From Peer Review to Peer Learning.

AJR Am J Roentgenol 2018 Mar 11;210(3):578-582. Epub 2018 Jan 11.

Beth Israel Deaconess Medical Center, Boston, MA.

Objective: The purpose of this article is to outline practical steps that a department can take to transition to a peer learning model.

Conclusion: The 2015 Institute of Medicine report on improving diagnosis emphasized that organizations and industries that embrace error as an opportunity to learn tend to outperform those that do not. To meet this charge, radiology must transition from a peer review to a peer learning approach.
DOI: http://dx.doi.org/10.2214/AJR.17.18660
March 2018

Deep Learning to Classify Radiology Free-Text Reports.

Radiology 2018 Mar;286(3):845-852. Epub 2017 Nov 13.

From the Department of Radiology, Stanford University School of Medicine, Stanford University Medical Center, 725 Welch Rd, Room 1675, Stanford, Calif 94305-5913 (M.C.C., N.M., D.B.L., C.P.L., M.P.L.); Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, Calif (R.L.B., L.Y.); Department of Bioinformatics, University of Utah Medical Center, Salt Lake City, Utah (B.E.C.); and Department of Radiology, Duke University Medical Center, Durham, NC (T.J.A.).

Purpose: To evaluate the performance of a deep learning convolutional neural network (CNN) model compared with a traditional natural language processing (NLP) model in extracting pulmonary embolism (PE) findings from thoracic computed tomography (CT) reports from two institutions.

Materials and Methods: Contrast material-enhanced CT examinations of the chest performed between January 1, 1998, and January 1, 2016, were selected. Two human radiologists annotated each report for three categories: the presence, chronicity, and location of PE. The classification performance of a CNN model, combined with an unsupervised learning algorithm for obtaining vector representations of words, was compared with that of the open-source application PeFinder. Sensitivity, specificity, accuracy, and F1 scores for both the CNN model and PeFinder were determined in the internal and external validation sets.

Results: The CNN model demonstrated an accuracy of 99% and an area under the curve of 0.97. For internal validation report data, the CNN model had a statistically significantly larger F1 score (0.938) than PeFinder (0.867) when classifying findings as either PE positive or PE negative, but no significant difference in sensitivity, specificity, or accuracy was found. For external validation report data, no statistical difference between the performance of the CNN model and PeFinder was found.

Conclusion: A deep learning CNN model can classify radiology free-text reports with accuracy equivalent to or beyond that of an existing traditional NLP model. RSNA, 2017. Online supplemental material is available for this article.
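For reference, the F1 score used above to compare the two classifiers is the harmonic mean of precision and recall. The sketch below uses toy counts, not data from the study:

```python
def f1_score(tp, fp, fn):
    """F1 score from true-positive, false-positive, and false-negative
    counts: the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion-matrix counts for a PE-positive / PE-negative
# labeling task (invented for illustration, not the study's results):
print(f1_score(tp=90, fp=5, fn=7))  # equivalently 2*90 / (2*90 + 5 + 7)
```

Because F1 ignores true negatives, it is a common choice when, as in PE detection, the positive class is the one of interest and negatives dominate.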
DOI: http://dx.doi.org/10.1148/radiol.2017171115
March 2018

Performance of a Deep-Learning Neural Network Model in Assessing Skeletal Maturity on Pediatric Hand Radiographs.

Radiology 2018 Apr 2;287(1):313-322. Epub 2017 Nov 2.

From the Departments of Radiology (D.B.L., M.P.L., S.S.H., C.P.L.), Computer Science (M.C.C.), and Biomedical Informatics (C.P.L.), Stanford University School of Medicine, 300 Pasteur Dr, Stanford, CA 94305-5105; and Department of Radiology, Children's Hospital Colorado, Aurora, Colo (N.V.S.).

Purpose: To compare the performance of a deep-learning bone age assessment model based on hand radiographs with that of expert radiologists and that of existing automated models.

Materials and Methods: The institutional review board approved the study. A total of 14 036 clinical hand radiographs and corresponding reports were obtained from two children's hospitals to train and validate the model. For the first test set, composed of 200 examinations, the mean of bone age estimates from the clinical report and three additional human reviewers was used as the reference standard. Overall model performance was assessed by comparing the root mean square (RMS) and mean absolute difference (MAD) between the model estimates and the reference standard bone ages. Ninety-five percent limits of agreement were calculated in a pairwise fashion for all reviewers and the model. The RMS of a second test set, composed of 913 examinations from the publicly available Digital Hand Atlas, was compared with published reports of an existing automated model.

Results: The mean difference between bone age estimates of the model and of the reviewers was 0 years, with a mean RMS and MAD of 0.63 and 0.50 years, respectively. The estimates of the model, the clinical report, and the three reviewers were within the 95% limits of agreement. RMS for the Digital Hand Atlas data set was 0.73 years, compared with 0.61 years for a previously reported model.

Conclusion: A deep-learning convolutional neural network model can estimate skeletal maturity with accuracy similar to that of an expert radiologist and to that of existing automated models. RSNA, 2017. An earlier incorrect version of this article appeared online. This article was corrected on January 19, 2018.
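The RMS and MAD agreement metrics used above can be illustrated with a short sketch; the bone age values below are hypothetical, not the study's data:

```python
from math import sqrt

def rms_and_mad(estimates, reference):
    """Root mean square and mean absolute difference between model
    estimates and reference-standard bone ages (both in years)."""
    diffs = [e - r for e, r in zip(estimates, reference)]
    rms = sqrt(sum(d * d for d in diffs) / len(diffs))
    mad = sum(abs(d) for d in diffs) / len(diffs)
    return rms, mad

# Hypothetical bone age estimates in years (invented for illustration):
model = [10.5, 8.0, 13.2, 6.9]
reference = [10.0, 8.5, 13.0, 7.0]
rms, mad = rms_and_mad(model, reference)
print(f"RMS = {rms:.2f} y, MAD = {mad:.2f} y")
```

RMS penalizes large individual errors more heavily than MAD, which is why the two are reported together: similar values (as in the abstract's 0.63 and 0.50 years) suggest the errors are not dominated by a few outliers.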
DOI: http://dx.doi.org/10.1148/radiol.2017170236
April 2018

Decreasing Stroke Code to CT Time in Patients Presenting with Stroke Symptoms.

Radiographics 2017 Sep-Oct;37(5):1559-1568. Epub 2017 Aug 18.

From the Departments of Radiology (A.K., L.J.M., D.M., C.Z., M.W., D.B.L.), Neurology and Neurological Sciences (S.C., N.V.), and Emergency Medicine (G.O.), Stanford University School of Medicine, Stanford Hospital and Clinics, Stanford, Calif; and Neuroscience Service Line, Department of Medicine, Christiana Care Health System, Newark, Del (W.A.T.).

Guided quality improvement (QI) programs present an effective means to streamline stroke code to CT times in a comprehensive stroke center. Applying QI methods and a multidisciplinary team approach may decrease the stroke code to CT time in non-prenotified emergency department (ED) patients presenting with symptoms of stroke. The aim of this project was to decrease this time for non-prenotified stroke code patients from a baseline mean of 20 minutes to less than 15 minutes during an 18-week period by applying QI methods in the context of a structured QI program. By reducing this time, it was expected that the door-to-CT time guideline of 25 minutes could be met more consistently. Through the structured QI program, we gained an understanding of the process that enabled us to effectively identify key drivers of performance to guide project interventions. As a result of these interventions, the stroke code to CT time for non-prenotified stroke code patients decreased to a mean of less than 14 minutes. This article reports these methods and results so that others can similarly improve the time it takes to perform nonenhanced CT studies in non-prenotified stroke code patients in the ED. RSNA, 2017.
DOI: http://dx.doi.org/10.1148/rg.2017160190
December 2017

The Role of Radiology in the Diagnostic Process: Information, Communication, and Teamwork.

AJR Am J Roentgenol 2017 Nov 25;209(5):992-1000. Epub 2017 Jul 25.

Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, Stanford, CA 94305-5105.

Objective: The diagnostic radiology process represents a partnership between clinical and radiology teams. As such, breakdowns in interpersonal interactions and communication can result in patient harm.

Conclusion: We explore the role of radiology in the diagnostic process, focusing on key concepts of information and communication, as well as key interpersonal interactions of teamwork, collaboration, and collegiality, all based on trust. We propose 10 principles to facilitate effective information flow in the diagnostic process.
DOI: http://dx.doi.org/10.2214/AJR.17.18381
November 2017

Improving efficiency in the radiology department.

Pediatr Radiol 2017 Jun 23;47(7):783-792. Epub 2017 May 23.

Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA.

The modern radiology department is built around the flow of information. Ordering providers request imaging studies to be performed, technologists complete the work required to perform the imaging studies, and radiologists interpret and report on the imaging findings. As each of these steps is performed, data flow between multiple information systems, most notably the radiology information system (RIS), the picture archiving and communication system (PACS) and the voice dictation system. Even though data flow relatively seamlessly, the majority of our systems and processes are inefficient. The purpose of this article is to describe the radiology value stream and describe how radiology informaticists in one department have worked to improve the efficiency of the value stream at each step. Through these examples, we identify and describe several themes that we believe have been crucial to our success.
DOI: http://dx.doi.org/10.1007/s00247-017-3828-7
June 2017

Understanding and Applying the Concept of Value Creation in Radiology.

J Am Coll Radiol 2017 Apr 20;14(4):549-557. Epub 2017 Feb 20.

Henry Ford Health System, Detroit, Michigan.

The concept of value in radiology has been strongly advocated in recent years as a means of advancing patient care and decreasing waste. This article explores the concept of value creation in radiology and offers a framework for how radiology practices can create value according to the needs of their referring clinicians. Value only exists in the eyes of a customer. We propose that the primary purpose of diagnostic radiology is to answer clinical questions using medical imaging to help guide management of patient care. Because they are the direct recipient of this service, we propose that referring clinicians are the direct customers of a radiology practice and patients are indirect customers. Radiology practices create value as they understand and fulfill their referring clinicians' needs. To narrow those needs to actionable categories, we propose a framework consisting of four major dimensions: (1) how quickly the clinical question needs to be answered, (2) the degree of specialization required to answer the question, (3) how often the referring clinician uses imaging, and (4) the breadth of imaging that the referring clinician uses. We further identify three major settings in which referring clinicians utilize radiological services: (1) emergent or urgent care, (2) primary care, and (3) specialty care. Practices best meet these needs as they engage with their referring clinicians, create a shared vision, work together as a cohesive team, structure the organization to meet referring clinicians' needs, build the tools, and continually improve in ways that help referring clinicians care for patients.
DOI: http://dx.doi.org/10.1016/j.jacr.2016.12.023
April 2017

Reducing Functional MR Imaging Acquisition Times by Optimizing Workflow.

Radiographics 2017 Jan-Feb;37(1):316-322

From the Department of Radiology, Stanford Health Care, Lucas Center for Imaging, 1201 Welch Rd, Room P271, Stanford, CA 94305.

Functional magnetic resonance (MR) imaging is a complex, specialized examination that is able to noninvasively measure information critical to patient care such as hemispheric language lateralization (1). Diagnostic functional MR imaging requires extensive patient interaction as well as the coordinated efforts of the entire health care team. We observed in our practice at an academic center that the times to perform functional MR imaging examinations were excessively lengthy, making scheduling of the examination difficult. The purpose of our project was to reduce functional MR imaging acquisition times by increasing the efficiency of our workflow, using specific quality tools to drive improvement of functional MR imaging. We assembled a multidisciplinary team and retrospectively reviewed all functional MR imaging examinations performed at our institution from January 2013 to August 2015. We identified five key drivers: (a) streamlined protocols, (b) consistent patient monitoring, (c) clear visual slides and audio, (d) improved patient understanding, and (e) minimized patient motion. We then implemented four specific interventions over a period of 10 months: (a) eliminating intravenous contrast medium, (b) reducing repeated language paradigms, (c) updating technologist and physician checklists, and (d) updating visual slides and audio. Our mean functional MR imaging acquisition time was reduced from 76.3 to 53.2 minutes, while our functional MR imaging examinations remained of diagnostic quality. As a result, we reduced our routine scheduling time for functional MR imaging from 2 hours to 1 hour, improving patient comfort and satisfaction as well as saving time for additional potential MR imaging acquisitions. Our efforts to optimize functional MR imaging workflow constitute a practice quality improvement project that is beneficial for patient care and can be applied broadly to other functional MR imaging practices. RSNA, 2017.
DOI: http://dx.doi.org/10.1148/rg.2017160035
September 2017