Publications by authors named "Andrew Bishara"

6 Publications


Postoperative delirium prediction using machine learning models and preoperative electronic health record data.

BMC Anesthesiol 2022 01 3;22(1). Epub 2022 Jan 3.

Department of Anesthesia and Perioperative Care, University of California, San Francisco, 521 Parnassus Avenue, San Francisco, CA, 94143, USA.

Background: Accurate, pragmatic risk stratification for postoperative delirium (POD) is necessary to target preventative resources toward high-risk patients. Machine learning (ML) offers a novel approach to leveraging electronic health record (EHR) data for POD prediction. We sought to develop and internally validate an ML-derived POD risk prediction model using preoperative risk features, and to compare its performance to models developed with traditional logistic regression.

Methods: This was a retrospective analysis of preoperative EHR data from 24,885 adults undergoing a procedure requiring anesthesia care, recovering in the main post-anesthesia care unit, and staying in the hospital at least overnight between December 2016 and December 2019 at either of two hospitals in a tertiary care health system. One hundred fifteen preoperative risk features including demographics, comorbidities, nursing assessments, surgery type, and other preoperative EHR data were used to predict postoperative delirium (POD), defined as any instance of Nursing Delirium Screening Scale ≥2 or positive Confusion Assessment Method for the Intensive Care Unit within the first 7 postoperative days. Two ML models (Neural Network and XGBoost), two traditional logistic regression models ("clinician-guided" and "ML hybrid"), and a previously described delirium risk stratification tool (AWOL-S) were evaluated using the area under the receiver operating characteristic curve (AUC-ROC), sensitivity, specificity, positive likelihood ratio, and positive predictive value. Model calibration was assessed with a calibration curve. Patients with no POD assessments charted or at least 20% of input variables missing were excluded.
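The stated exclusion rule (drop patients with at least 20% of input variables missing) can be sketched as follows. This is an illustration, not the authors' code; the feature names and records are hypothetical.

```python
# Sketch of the exclusion rule: discard any record in which at least 20%
# of the input variables are missing (represented here as None).

def missing_fraction(record, feature_names):
    """Fraction of the listed features that are absent or None."""
    missing = sum(1 for f in feature_names if record.get(f) is None)
    return missing / len(feature_names)

def apply_exclusion(records, feature_names, max_missing=0.20):
    """Keep only records with strictly fewer than max_missing missing inputs."""
    return [r for r in records if missing_fraction(r, feature_names) < max_missing]

# Hypothetical preoperative features and a two-patient cohort.
features = ["age", "asa_class", "surgery_type", "braden_score", "creatinine"]
cohort = [
    {"age": 71, "asa_class": 3, "surgery_type": "ortho", "braden_score": 18, "creatinine": 1.1},
    {"age": 64, "asa_class": None, "surgery_type": None, "braden_score": None, "creatinine": 0.9},
]
kept = apply_exclusion(cohort, features)
print(len(kept))  # 1: the second record (60% missing) is excluded
```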

Results: POD incidence was 5.3%. The AUC-ROC for the Neural Network was 0.841 [95% CI 0.816-0.863] and for XGBoost was 0.851 [95% CI 0.827-0.874], which was significantly better than the clinician-guided (AUC-ROC 0.763 [0.734-0.793], p < 0.001) and ML hybrid (AUC-ROC 0.824 [0.800-0.849], p < 0.001) regression models and AWOL-S (AUC-ROC 0.762 [95% CI 0.713-0.812], p < 0.001). The Neural Network, XGBoost, and ML hybrid models demonstrated excellent calibration, while calibration of the clinician-guided and AWOL-S models was moderate; they tended to overestimate delirium risk in patients already at highest risk.

Conclusion: Using pragmatically collected EHR data, two ML models predicted POD in a broad perioperative population with high discrimination. Optimal application of the models would provide automated, real-time delirium risk stratification to improve perioperative management of surgical patients at risk for POD.
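The discrimination metric used throughout this abstract, AUC-ROC, can be computed as the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case (the Mann-Whitney U formulation). A minimal sketch, with made-up scores and labels (not the study's data):

```python
# AUC-ROC via pairwise comparison of positive vs. negative scores; ties
# count as one half. Equivalent to the rank-sum (Mann-Whitney U) estimate.

def auc_roc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.2, 0.7]  # hypothetical model risk scores
labels = [1,   1,   0,   0,   0]    # hypothetical delirium outcomes
print(auc_roc(scores, labels))  # 1.0: every positive outranks every negative
```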
http://dx.doi.org/10.1186/s12871-021-01543-y
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8722098

Opal: an implementation science tool for machine learning clinical decision support in anesthesia.

J Clin Monit Comput 2021 Nov 27. Epub 2021 Nov 27.

Bakar Computational Health Sciences Institute, University of California San Francisco, San Francisco, CA, USA.

Opal is the first published example of a full-stack platform infrastructure designed for implementation science of machine learning (ML) in anesthesia, addressing the challenge of leveraging ML for clinical decision support. Users interact with a secure online Opal web application to select a desired operating room (OR) case cohort for data extraction, visualize datasets with built-in graphing techniques, and run in-client ML or extract data for external use. Opal was used to obtain data from 29,004 unique OR cases from a single academic institution for pre-operative prediction of post-operative acute kidney injury (AKI) based on creatinine KDIGO criteria, using predictors that included pre-operative demographics, past medical history, medications, and flowsheet information. To demonstrate utility with unsupervised learning, Opal was also used to extract intra-operative flowsheet data from 2995 unique OR cases, and patients were clustered using principal component analysis (PCA) and k-means clustering. A gradient boosting machine model was developed using an 80/20 train-to-test ratio and yielded an area under the receiver operating characteristic curve (ROC-AUC) of 0.85 with 95% CI [0.80-0.90]. At the default probability decision threshold of 0.5, the model sensitivity was 0.9 and the specificity was 0.8. K-means clustering was performed to partition the cases into two clusters for hypothesis generation about potential groups of outcomes related to intraoperative vitals. Opal's design has created streamlined ML functionality for researchers and clinicians in the perioperative setting and opens the door to many future clinical applications, including data mining, clinical simulation, high-frequency prediction, and quality improvement.
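The operating point reported above (sensitivity 0.9, specificity 0.8 at a 0.5 threshold) is obtained by dichotomizing predicted probabilities at a fixed cutoff. A hedged sketch with illustrative probabilities and outcomes (not Opal's data or code):

```python
# Sensitivity and specificity of a probabilistic classifier at a fixed
# decision threshold, as reported for the gradient boosting AKI model.

def sens_spec(probs, outcomes, threshold=0.5):
    tp = sum(1 for p, y in zip(probs, outcomes) if p >= threshold and y == 1)
    fn = sum(1 for p, y in zip(probs, outcomes) if p < threshold and y == 1)
    tn = sum(1 for p, y in zip(probs, outcomes) if p < threshold and y == 0)
    fp = sum(1 for p, y in zip(probs, outcomes) if p >= threshold and y == 0)
    sensitivity = tp / (tp + fn)  # fraction of true events flagged
    specificity = tn / (tn + fp)  # fraction of non-events correctly cleared
    return sensitivity, specificity

probs    = [0.92, 0.60, 0.40, 0.10, 0.75, 0.30]  # hypothetical predictions
outcomes = [1,    1,    1,    0,    0,    0]     # hypothetical AKI outcomes
sens, spec = sens_spec(probs, outcomes)
print(sens, spec)  # 2/3 sensitivity, 2/3 specificity at threshold 0.5
```

Raising the threshold trades sensitivity for specificity, which is why the operating point must be reported alongside the ROC-AUC.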
http://dx.doi.org/10.1007/s10877-021-00774-1

Machine Learning Prediction of Liver Allograft Utilization From Deceased Organ Donors Using the National Donor Management Goals Registry.

Transplant Direct 2021 Oct 27;7(10):e771. Epub 2021 Sep 27.

Department of Anesthesia and Perioperative Care, University of California San Francisco, San Francisco, CA.

Early prediction of whether a liver allograft will be utilized for transplantation may allow better resource deployment during donor management and improve organ allocation. The national donor management goals (DMG) registry contains critical care data collected during donor management. We developed a machine learning model to predict transplantation of a liver graft based on data from the DMG registry.

Methods: Several machine learning classifiers were trained to predict transplantation of a liver graft. We utilized 127 variables available in the DMG dataset. We included data from potential deceased organ donors between April 2012 and January 2019. The outcome was defined as liver recovery for transplantation in the operating room. The prediction was made based on data available 12-18 h after the time of authorization for transplantation. The data were randomly separated into training (60%), validation (20%), and test sets (20%). We compared the performance of our models to the Liver Discard Risk Index.
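The random 60/20/20 train/validation/test partition described above can be sketched with the standard library alone. The donor IDs and seed are hypothetical; this is an illustration of the split, not the authors' pipeline.

```python
# Random 60/20/20 partition of a cohort into train/validation/test sets.
import random

def split_60_20_20(ids, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = ids[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

ids = list(range(100))  # hypothetical donor identifiers
train, val, test = split_60_20_20(ids)
print(len(train), len(val), len(test))  # 60 20 20
```

Holding out the test set until model selection is finished (on the validation set) is what makes the reported test-set AUC an honest estimate.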

Results: Of 13,629 donors in the dataset, 9255 (68%) livers were recovered and transplanted, 1519 were recovered but used for research or discarded, and 2855 were not recovered. The optimized gradient boosting machine classifier achieved an area under the receiver operating characteristic curve of 0.84 on the test set, outperforming all other classifiers.

Conclusions: This model predicts successful liver recovery for transplantation in the operating room, using data available early during donor management. It performs favorably when compared to existing models. It may provide real-time decision support during organ donor management and transplant logistics.
http://dx.doi.org/10.1097/TXD.0000000000001212
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8478404

Coronary artery disease detection using artificial intelligence techniques: A survey of trends, geographical differences and diagnostic features 1991-2020.

Comput Biol Med 2021 01 28;128:104095. Epub 2020 Oct 28.

Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan.

Coronary angiography is the gold standard diagnostic tool for coronary artery disease (CAD), but it is an invasive technique requiring arterial puncture, carries procedural risk, and exposes the patient to radiation and iodinated contrast. Artificial intelligence (AI) can provide a pretest probability of disease that can be used to triage patients for angiography. This review comprehensively investigates published papers in the domain of CAD detection using different AI techniques from 1991 to 2020, in order to discern broad trends and geographical differences. Moreover, key decision factors affecting CAD diagnosis are identified for different parts of the world by aggregating the results from different studies. All datasets used in these studies, their properties, and the performances achieved with various AI techniques are presented, compared, and analyzed. In particular, the effectiveness of machine learning (ML) and deep learning (DL) techniques for diagnosing and predicting CAD is reviewed. From searches of PubMed, Scopus, Ovid MEDLINE, and Google Scholar, 500 papers were selected for investigation; of these, 256 met our criteria and were included in this study. Our findings demonstrate that AI-based techniques have been increasingly applied for the detection of CAD since 2008. AI-based techniques that utilized electrocardiography (ECG), demographic characteristics, symptoms, physical examination findings, and heart rate signals reported high accuracy for the detection of CAD. In these papers, the authors ranked features based on their assessed clinical importance using ML techniques. The results demonstrate that the relative importance attributed to ML features for CAD diagnosis differs among countries. More recently, DL methods have yielded high CAD detection performance using ECG signals, which drives their burgeoning adoption.
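The triage idea mentioned above, combining a pretest probability with a test's likelihood ratio to get a posttest probability, follows the standard odds form of Bayes' theorem. A minimal sketch with hypothetical numbers (not taken from the survey):

```python
# Posttest probability from pretest probability and a likelihood ratio,
# via odds-form Bayes: posttest_odds = pretest_odds * LR.

def posttest_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Hypothetical: a model assigns 25% pretest CAD risk; a test with a
# positive likelihood ratio of 6 comes back positive.
p = posttest_probability(pretest_prob=0.25, likelihood_ratio=6.0)
print(round(p, 3))  # 0.667
```

A high posttest probability might support referral for angiography, while a low one might support deferral; the thresholds themselves are a clinical policy choice outside this sketch.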
http://dx.doi.org/10.1016/j.compbiomed.2020.104095

Development and validation of machine learning models to predict gastrointestinal leak and venous thromboembolism after weight loss surgery: an analysis of the MBSAQIP database.

Surg Endosc 2021 01 17;35(1):182-191. Epub 2020 Jan 17.

Institute for Health System Innovation and Policy, Boston University, 601, 656 Beacon Street, Boston, MA, 02215, USA.

Background: Postoperative gastrointestinal leak and venous thromboembolism (VTE) are devastating complications of bariatric surgery. The performance of currently available predictive models for these complications remains wanting, while machine learning has shown promise to improve on traditional modeling approaches. The purpose of this study was to compare two machine learning strategies, artificial neural networks (ANNs) and gradient boosting machines (XGBs), with conventional logistic regression (LR) models in predicting leak and VTE after bariatric surgery.

Methods: ANN, XGB, and LR prediction models for leak and VTE among adults undergoing initial elective weight loss surgery were trained and validated using preoperative data from the 2015-2017 Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) database. Data were randomly split into training, validation, and testing populations. Model performance was measured by the area under the receiver operating characteristic curve (AUC) on the testing data for each model.

Results: The study cohort contained 436,807 patients. The incidences of leak and VTE were 0.70% and 0.46%, respectively. ANN (AUC 0.75, 95% CI 0.73-0.78) was the best-performing model for predicting leak, followed by XGB (AUC 0.70, 95% CI 0.68-0.72) and then LR (AUC 0.63, 95% CI 0.61-0.65; p < 0.001 for all comparisons). In detecting VTE, ANN, XGB, and LR achieved similar AUCs of 0.65 (95% CI 0.63-0.68), 0.67 (95% CI 0.64-0.70), and 0.64 (95% CI 0.61-0.66), respectively; the performance difference between XGB and LR was statistically significant (p = 0.001).

Conclusions: ANN and XGB outperformed traditional LR in predicting leak. These results suggest that ML has the potential to improve risk stratification for bariatric surgery, especially as techniques to extract more granular data from medical records improve. Further studies investigating the merits of machine learning to improve patient selection and risk management in bariatric surgery are warranted.
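One common way to obtain AUC confidence intervals like those reported above is a nonparametric bootstrap over the held-out test set (the abstract does not state which method the authors used, so this is a hedged sketch; the data, resample count, and AUC helper are illustrative).

```python
# Percentile-bootstrap confidence interval for AUC on a test set.
import random

def auc(scores, labels):
    """Rank-statistic AUC; ties count as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(scores, labels, n_boot=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    indices = list(range(len(scores)))
    aucs = []
    for _ in range(n_boot):
        sample = [rng.choice(indices) for _ in indices]  # resample with replacement
        s = [scores[i] for i in sample]
        y = [labels[i] for i in sample]
        if 0 < sum(y) < len(y):  # need both classes present to compute AUC
            aucs.append(auc(s, y))
    aucs.sort()
    lo = aucs[int(alpha / 2 * len(aucs))]
    hi = aucs[int((1 - alpha / 2) * len(aucs)) - 1]
    return lo, hi

# Hypothetical test-set predictions and outcomes.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2, 0.55, 0.45]
labels = [1,   1,   0,   1,   0,   0,    0,   1,   1,    0]
lo, hi = bootstrap_auc_ci(scores, labels, n_boot=200)
print(round(lo, 2), round(hi, 2))
```

With the tiny sample here the interval is wide; on a test set the size of the MBSAQIP cohort the same procedure yields the narrow intervals quoted in the Results.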
http://dx.doi.org/10.1007/s00464-020-07378-x

Reduced-gravity environment hardware demonstrations of a prototype miniaturized flow cytometer and companion microfluidic mixing technology.

J Vis Exp 2014 Nov 13(93):e51743. Epub 2014 Nov 13.

DNA Medicine Institute.

Until recently, astronaut blood samples were collected in-flight, transported to earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition towards space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight. Countless POC devices have been developed to mimic laboratory scale counterparts, but most have narrow applications and few have demonstrable use in an in-flight, reduced-gravity environment. In fact, demonstrations of biomedical diagnostics in reduced gravity are limited altogether, making component choice and certain logistical challenges difficult to approach when seeking to test new technology. To help fill the void, we are presenting a modular method for the construction and operation of a prototype blood diagnostic device and its associated parabolic flight test rig that meet the standards for flight-testing onboard a parabolic flight, reduced-gravity aircraft. The method first focuses on rig assembly for in-flight, reduced-gravity testing of a flow cytometer and a companion microfluidic mixing chip. Components are adaptable to other designs and some custom components, such as a microvolume sample loader and the micromixer may be of particular interest. The method then shifts focus to flight preparation, by offering guidelines and suggestions to prepare for a successful flight test with regard to user training, development of a standard operating procedure (SOP), and other issues. Finally, in-flight experimental procedures specific to our demonstrations are described.
http://dx.doi.org/10.3791/51743
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4354048