Publications by authors named "Skye Aaron"

18 Publications


Uncontrolled blood pressure and treatment of hypertension in older chronic kidney disease patients.

J Am Geriatr Soc 2021 10 5;69(10):2985-2987. Epub 2021 Jun 5.

Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA.

http://dx.doi.org/10.1111/jgs.17304
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8497406
October 2021

Algorithmic Detection of Boolean Logic Errors in Clinical Decision Support Statements.

Appl Clin Inform 2021 01 10;12(1):182-189. Epub 2021 Mar 10.

School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, United States.

Objective: Clinical decision support (CDS) can contribute to quality and safety. Prior work has shown that errors in CDS systems are common and can lead to unintended consequences. Many CDS systems use Boolean logic, which can be difficult for CDS analysts to specify accurately. We set out to determine the prevalence of certain types of Boolean logic errors in CDS statements.

Methods: Nine health care organizations extracted Boolean logic statements from their Epic electronic health record (EHR). We developed an open-source software tool, which implemented the Espresso logic minimization algorithm, to identify three classes of logic errors.

Results: Participating organizations submitted 260,698 logic statements, of which 44,890 were minimized by Espresso. We found errors in 209 of them. Every participating organization had at least two errors, and all organizations reported that they would act on the feedback.

Discussion: An automated algorithm can readily detect specific categories of Boolean CDS logic errors. These errors represent a minority of CDS errors, but very likely require correction to avoid patient safety issues. This process found only a few errors at each site, but the problem appears to be widespread, affecting all participating organizations.

Conclusion: Both CDS implementers and EHR vendors should consider implementing similar algorithms as part of the CDS authoring process to reduce the number of errors in their CDS interventions.
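The abstract does not detail the tool's internals, but the class of error it targets can be illustrated with a brute-force sketch: a Boolean CDS clause that is unsatisfiable (e.g., `A AND NOT A`) can never fire, and truth-table enumeration detects this directly. The variable names and the broken clause below are hypothetical; this is a sketch of the idea, not the published Espresso-based tool.

```python
from itertools import product

def find_contradiction(variables, clause):
    """Return True if the clause can never evaluate True
    (a contradiction such as A AND NOT A), by brute-force
    truth-table enumeration over all variable assignments."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if clause(env):
            return False  # satisfiable: at least one assignment fires
    return True  # unsatisfiable under every assignment

# Hypothetical CDS inclusion logic with a subtle copy-paste error:
# the author meant "on_warfarin and not on_aspirin" but wrote
# "on_warfarin and not on_warfarin", so the alert never fires.
broken = lambda e: e["on_warfarin"] and not e["on_warfarin"]
ok     = lambda e: e["on_warfarin"] and not e["on_aspirin"]

print(find_contradiction(["on_warfarin", "on_aspirin"], broken))  # True
print(find_contradiction(["on_warfarin", "on_aspirin"], ok))      # False
```

Brute-force enumeration is exponential in the number of variables; a production tool would use a minimization algorithm such as Espresso, as the paper did.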
http://dx.doi.org/10.1055/s-0041-1722918
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7946596
January 2021

Implementation of a Novel User Interface for Review of Clinical Microbiology Results.

Stud Health Technol Inform 2019 Aug;264:1823-1824

Brigham & Women's Hospital, Boston, MA, USA.

Compared to other laboratory data, microbiology data are a complex mix of quantitative and qualitative results that return iteratively over time. Commercial electronic health records (EHRs) frequently have limitations in how they manage microbiology data, displaying it as text rather than attempting to codify it. This contributes to time-consuming and error-prone clinical workflows. We developed a microbiology viewer application to aggregate results and implemented it in our EHR.
http://dx.doi.org/10.3233/SHTI190666
August 2019

Continuous Video Recording of Electronic Health Record User Sessions to Support Usability and Safety.

Stud Health Technol Inform 2019 Aug;264:1811-1812

Brigham & Women's Hospital, Boston, MA.

Electronic health records (EHRs) have been shown to improve safety and quality. However, usability and safety issues with EHRs have been reported. The current state of the art in usability testing is to have clinicians conduct simulated activities in a usability lab. In this poster, we describe our experience with continuous recording of real-world EHR use to improve safety and usability.
http://dx.doi.org/10.3233/SHTI190660
August 2019

Structured override reasons for drug-drug interaction alerts in electronic health records.

J Am Med Inform Assoc 2019 10;26(10):934-942

School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA.

Objective: The study sought to determine availability and use of structured override reasons for drug-drug interaction (DDI) alerts in electronic health records.

Materials And Methods: We collected data on DDI alerts and override reasons from 10 clinical sites across the United States using a variety of electronic health records. We used a multistage iterative card sort method to categorize the override reasons from all sites and identified best practices.

Results: Our methodology established 177 unique override reasons across the 10 sites. The number of coded override reasons at each site ranged from 3 to 100. Many sites offered override reasons not relevant to DDIs. Twelve categories of override reasons were identified. Three categories accounted for 78% of all overrides: "will monitor or take precautions," "not clinically significant," and "benefit outweighs risk."

Discussion: We found wide variability in override reasons between sites and many opportunities to improve alerts. Some override reasons were irrelevant to DDIs. Many override reasons attested to a future action (eg, decreasing a dose or ordering monitoring tests), which requires an additional step after the alert is overridden, unless the alert is made actionable. Some override reasons deferred to another party, although override reasons often are not visible to other users. Many override reasons stated that the alert was inaccurate, suggesting that specificity of alerts could be improved.

Conclusions: Organizations should improve the options available to providers who choose to override DDI alerts. DDI alerting systems should be actionable and alerts should be tailored to the patient and drug pairs.
http://dx.doi.org/10.1093/jamia/ocz033
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6748816
October 2019

Cranky comments: detecting clinical decision support malfunctions through free-text override reasons.

J Am Med Inform Assoc 2019 01;26(1):37-43

Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA.

Background: Rule-based clinical decision support alerts are known to malfunction, but tools for discovering malfunctions are limited.

Objective: Investigate whether user override comments can be used to discover malfunctions.

Methods: We manually classified all rules in our database with at least 10 override comments into 3 categories based on a sample of override comments: "broken," "not broken, but could be improved," and "not broken." We used 3 methods (frequency of comments, cranky word list heuristic, and a Naïve Bayes classifier trained on a sample of comments) to automatically rank rules based on features of their override comments. We evaluated each ranking using the manual classification as truth.

Results: Of the rules investigated, 62 were broken, 13 could be improved, and the remaining 45 were not broken. Frequency of comments performed worse than a random ranking, with precision at 20 of 8 and AUC = 0.487. The cranky comments heuristic performed better with precision at 20 of 16 and AUC = 0.723. The Naïve Bayes classifier had precision at 20 of 17 and AUC = 0.738.

Discussion: Override comments uncovered malfunctions in 26% of all rules active in our system. This is a lower bound on total malfunctions and much higher than expected. Even for low-resource organizations, reviewing comments identified by the cranky word list heuristic may be an effective and feasible way of finding broken alerts.

Conclusion: Override comments are a rich data source for finding alerts that are broken or could be improved. If possible, we recommend monitoring all override comments on a regular basis.
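A minimal sketch of the cranky-word-list heuristic described above: rank rules by the fraction of their override comments containing frustrated language. The word list, rule names, and comments below are illustrative, not the ones used in the study.

```python
# Hypothetical "cranky word" list; the study's actual list is not
# reproduced in the abstract, so this is illustrative only.
CRANKY_WORDS = {"wrong", "broken", "stupid", "annoying", "incorrect", "useless"}

def cranky_score(comments):
    """Fraction of a rule's override comments containing a cranky word."""
    if not comments:
        return 0.0
    hits = sum(
        any(w in comment.lower() for w in CRANKY_WORDS)
        for comment in comments
    )
    return hits / len(comments)

# Fabricated override comments for two hypothetical rules.
rules = {
    "lead_screening": ["patient already screened", "not due yet"],
    "aspirin_cad": ["alert is wrong, pt on warfarin", "broken again", "n/a"],
}
ranked = sorted(rules, key=lambda r: cranky_score(rules[r]), reverse=True)
print(ranked)  # aspirin_cad ranks first: 2 of 3 comments are cranky
```

A reviewer would then inspect the top-ranked rules manually, which is how the heuristic keeps the workload feasible for a low-resource organization.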
http://dx.doi.org/10.1093/jamia/ocy139
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6308015
January 2019

Best practices for preventing malfunctions in rule-based clinical decision support alerts and reminders: Results of a Delphi study.

Int J Med Inform 2018 10 2;118:78-85. Epub 2018 Aug 2.

School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States.

Objective: Developing effective and reliable rule-based clinical decision support (CDS) alerts and reminders is challenging. Using a previously developed taxonomy for alert malfunctions, we identified best practices for developing, testing, implementing, and maintaining alerts and avoiding malfunctions.

Materials And Methods: We identified 72 initial practices from the literature, interviews with subject matter experts, and prior research. To refine, enrich, and prioritize the list of practices, we used the Delphi method with two rounds of consensus-building and refinement. We used a larger than normal panel of experts to include a wide representation of CDS subject matter experts from various disciplines.

Results: 28 experts completed Round 1 and 25 completed Round 2. Round 1 narrowed the list to 47 best practices in 7 categories: knowledge management, designing and specifying, building, testing, deployment, monitoring and feedback, and people and governance. Round 2 developed consensus on the importance and feasibility of each best practice.

Discussion: The Delphi panel identified a range of best practices that may help to improve implementation of rule-based CDS and avert malfunctions. Due to limitations on resources and personnel, not everyone can implement all best practices. The most robust processes require investing in a data warehouse. Experts also pointed to the issue of shared responsibility between the healthcare organization and the electronic health record vendor.

Conclusion: These 47 best practices represent an ideal situation. The research identifies the balance between importance and difficulty, highlights the challenges faced by organizations seeking to implement CDS, and describes several opportunities for future research to reduce alert malfunctions.
http://dx.doi.org/10.1016/j.ijmedinf.2018.08.001
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6128667
October 2018

Smashing the strict hierarchy: three cases of clinical decision support malfunctions involving carvedilol.

J Am Med Inform Assoc 2018 11;25(11):1552-1555

Department of Biomedical Informatics, UTHealth - Memorial Hermann Center for Healthcare Quality and Safety, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA.

Clinical vocabularies allow for standard representation of clinical concepts, and can also contain knowledge structures, such as hierarchy, that facilitate the creation of maintainable and accurate clinical decision support (CDS). A key architectural feature of clinical hierarchies is how they handle parent-child relationships - specifically whether hierarchies are strict hierarchies (allowing a single parent per concept) or polyhierarchies (allowing multiple parents per concept). These structures handle subsumption relationships (ie, ancestor and descendant relationships) differently. In this paper, we describe three real-world malfunctions of clinical decision support related to incorrect assumptions about subsumption checking for β-blockers, specifically carvedilol, a non-selective β-blocker that also has α-blocker activity. We recommend that (1) CDS implementers should learn about the limitations of terminologies, hierarchies, and classification; (2) CDS implementers should thoroughly test CDS, with a focus on special or unusual cases; (3) CDS implementers should monitor feedback from users; and (4) electronic health record (EHR) and clinical content developers should offer and support polyhierarchical clinical terminologies, especially for medications.
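The strict-hierarchy pitfall described above can be sketched in a few lines: if carvedilol's only parent is "beta blocker", a subsumption check for "alpha blocker" descendants silently misses it, while a polyhierarchy with both parents succeeds. The concept names and traversal are illustrative, not any specific terminology's API.

```python
# Illustrative only: in a strict hierarchy each concept has one parent,
# so carvedilol filed under "beta blocker" is invisible to a rule that
# checks for descendants of "alpha blocker". A polyhierarchy allows
# both parents, so the subsumption check succeeds.
strict_parents = {
    "carvedilol": ["beta blocker"],                   # single parent only
}
poly_parents = {
    "carvedilol": ["beta blocker", "alpha blocker"],  # both activities
}

def is_a(concept, ancestor, parents):
    """True if `ancestor` is reachable from `concept` via parent links."""
    stack = [concept]
    while stack:
        node = stack.pop()
        if node == ancestor:
            return True
        stack.extend(parents.get(node, []))
    return False

print(is_a("carvedilol", "alpha blocker", strict_parents))  # False: CDS misses it
print(is_a("carvedilol", "alpha blocker", poly_parents))    # True
```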
http://dx.doi.org/10.1093/jamia/ocy091
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6213087
November 2018

Reduced Effectiveness of Interruptive Drug-Drug Interaction Alerts after Conversion to a Commercial Electronic Health Record.

J Gen Intern Med 2018 11 15;33(11):1868-1876. Epub 2018 May 15.

Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA.

Background: Drug-drug interaction (DDI) alerts in electronic health records (EHRs) can help prevent adverse drug events, but such alerts are frequently overridden, raising concerns about their clinical usefulness and contribution to alert fatigue.

Objective: To study the effect of conversion to a commercial EHR on DDI alert and acceptance rates.

Design: Two before-and-after studies.

Participants: 3277 clinicians who received a DDI alert in the outpatient setting.

Intervention: Introduction of a new, commercial EHR and subsequent adjustment of DDI alerting criteria.

Main Measures: Alert burden and proportion of alerts accepted.

Key Results: Overall interruptive DDI alert burden increased by a factor of 6 from the legacy EHR to the commercial EHR. The acceptance rate for the most severe alerts fell from 100% to 8.4%, and for medium-severity alerts from 29.3% to 7.5% (P < 0.001). After disabling the least severe alerts, total DDI alert burden fell by 50.5%, and acceptance of Tier 1 alerts rose from 9.1% to 12.7% (P < 0.01).

Conclusions: Changing from a highly tailored DDI alerting system to a more general one as part of an EHR conversion decreased acceptance of DDI alerts and increased alert burden on users. The decrease in acceptance rates cannot be fully explained by differences in the clinical knowledge base, nor can it be fully explained by alert fatigue associated with increased alert burden. Instead, workflow factors probably predominate, including timing of alerts in the prescribing process, lack of differentiation of more and less severe alerts, and features of how users interact with alerts.
http://dx.doi.org/10.1007/s11606-018-4415-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6206354
November 2018

Using statistical anomaly detection models to find clinical decision support malfunctions.

J Am Med Inform Assoc 2018 07;25(7):862-871

Department of General Internal Medicine and Primary Care, Brigham & Women's Hospital, Boston, MA, USA.

Objective: Malfunctions in clinical decision support (CDS) systems occur for a multitude of reasons and often go unnoticed, leading to potentially poor outcomes. Our goal was to identify malfunctions within CDS systems.

Methods: We evaluated 6 anomaly detection models: (1) Poisson Changepoint Model, (2) Autoregressive Integrated Moving Average (ARIMA) Model, (3) Hierarchical Divisive Changepoint (HDC) Model, (4) Bayesian Changepoint Model, (5) Seasonal Hybrid Extreme Studentized Deviate (SHESD) Model, and (6) E-Divisive with Median (EDM) Model and characterized their ability to find known anomalies. We analyzed 4 CDS alerts with known malfunctions from the Longitudinal Medical Record (LMR) and Epic® (Epic Systems Corporation, Madison, WI, USA) at Brigham and Women's Hospital, Boston, MA. The 4 rules recommend lead testing in children, aspirin therapy in patients with coronary artery disease, pneumococcal vaccination in immunocompromised adults and thyroid testing in patients taking amiodarone.

Results: Poisson changepoint, ARIMA, HDC, Bayesian changepoint and the SHESD model were able to detect anomalies in an alert for lead screening in children and in an alert for pneumococcal conjugate vaccine in immunocompromised adults. EDM was able to detect anomalies in an alert for monitoring thyroid function in patients on amiodarone.

Conclusions: Malfunctions/anomalies occur frequently in CDS alert systems. It is important to be able to detect such anomalies promptly. Anomaly detection models are useful tools to aid such detections.
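To make the changepoint idea concrete, here is a minimal two-segment Poisson changepoint sketch over simulated daily alert counts. This is not any of the six models benchmarked in the paper, just the underlying likelihood-split idea; the data are fabricated for illustration.

```python
import math

def poisson_loglik(counts):
    """Poisson log-likelihood of counts under their MLE rate (constants dropped)."""
    n = len(counts)
    lam = sum(counts) / n
    if lam == 0:
        return 0.0
    return sum(c * math.log(lam) for c in counts) - n * lam

def best_changepoint(counts):
    """Return the split index maximizing the two-segment Poisson likelihood -
    a minimal sketch of the changepoint idea, not a published model."""
    best_i, best_gain = None, 0.0
    base = poisson_loglik(counts)
    for i in range(1, len(counts)):
        gain = poisson_loglik(counts[:i]) + poisson_loglik(counts[i:]) - base
        if gain > best_gain:
            best_i, best_gain = i, gain
    return best_i

# Simulated daily firings of an alert that silently breaks on day 10.
daily = [52, 48, 50, 47, 55, 49, 51, 53, 50, 48, 0, 1, 0, 0, 1, 0, 0]
print(best_changepoint(daily))  # 10: the day the rule stopped firing
```

In practice a detector would also need a significance threshold on the likelihood gain, so that ordinary day-to-day variation is not flagged as a malfunction.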
http://dx.doi.org/10.1093/jamia/ocy041
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6016695
July 2018

Development and evaluation of a novel user interface for reviewing clinical microbiology results.

J Am Med Inform Assoc 2018 08;25(8):1064-1068

Division of General Internal Medicine and Primary Care, Brigham & Women's Hospital, Boston, MA, 02115, USA.

Background: Microbiology laboratory results are complex and cumbersome to review. We sought to develop a new review tool to improve the ease and accuracy of microbiology results review.

Methods: We observed and informally interviewed clinicians to determine areas in which existing microbiology review tools were lacking. We developed a new tool that reorganizes microbiology results by time and organism. We conducted a scenario-based usability evaluation to compare the new tool to existing legacy tools, using a balanced block design.

Results: The average time-on-task decreased from 45.3 min for the legacy tools to 27.1 min for the new tool (P < .0001). Total errors decreased from 41 with the legacy tools to 19 with the new tool (P = .0068). The average Single Ease Question score was 5.65 (out of 7) for the new tool, compared to 3.78 for the legacy tools (P < .0001). The new tool scored 88 ("Excellent") on the System Usability Scale.

Conclusions: The new tool substantially improved efficiency, accuracy, and usability. It was subsequently integrated into the electronic health record (EHR) and rolled out system-wide. This project provides an example of how clinical and informatics teams can innovate alongside a commercial EHR.
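The core reorganization the paper describes - pivoting a flat stream of culture results into an organism-by-time view - can be sketched with simple grouping. The field names and results below are hypothetical, not the actual data model of the tool.

```python
from collections import defaultdict

# Flat microbiology results as they might return from the lab over time
# (fabricated records; real results carry far more structure).
results = [
    {"date": "2018-03-01", "organism": "E. coli",   "value": "pan-sensitive"},
    {"date": "2018-03-04", "organism": "E. coli",   "value": "cipro-resistant"},
    {"date": "2018-03-04", "organism": "S. aureus", "value": "MRSA"},
]

# Pivot into organism -> {date: value} so serial results for one
# organism can be scanned across time in a single row.
grid = defaultdict(dict)
for r in results:
    grid[r["organism"]][r["date"]] = r["value"]

for organism in sorted(grid):
    timeline = ", ".join(f"{d}: {v}" for d, v in sorted(grid[organism].items()))
    print(f"{organism:10} | {timeline}")
```

The payoff of the pivot is that a change over time, such as E. coli becoming ciprofloxacin-resistant between cultures, sits on one line instead of being buried in separate text reports.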
http://dx.doi.org/10.1093/jamia/ocy014
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7646871
August 2018

Changes in hospital bond ratings after the transition to a new electronic health record.

J Am Med Inform Assoc 2018 05;25(5):572-574

Brigham and Women's Hospital, Boston, MA, USA.

Objective: To assess the impact of electronic health record (EHR) implementation on hospital finances.

Materials And Methods: We analyzed the impact of EHR implementation on bond ratings and net income from service to patients (NISP) at 32 hospitals that recently implemented a new EHR and a set of controls.

Results: After implementing an EHR, 7 hospitals had a bond downgrade, 7 had a bond upgrade, and 18 had no changes. There was no difference in the likelihood of bond rating changes or in changes to NISP following EHR go-live when compared to control hospitals.

Discussion: Most hospitals in our analysis saw no change in bond ratings following EHR go-live, with no significant differences observed between EHR implementation and control hospitals. There was also no apparent difference in NISP.

Conclusions: Implementation of an EHR did not appear to have an impact on bond ratings at the hospitals in our analysis.
http://dx.doi.org/10.1093/jamia/ocy007
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7647018
May 2018

Methods for Detecting Malfunctions in Clinical Decision Support Systems.

Stud Health Technol Inform 2017 ;245:1385

Brigham and Women's Hospital, Boston, MA.

Clinical decision support systems, when used effectively, can improve the quality of care. However, such systems can malfunction, and these malfunctions can be difficult to detect. In this poster, we describe four methods of detecting and resolving issues with clinical decision support: (1) statistical anomaly detection, (2) visual analytics and dashboards, (3) user feedback analysis, and (4) taxonomization of failure modes and effects.
June 2018

Clinical decision support alert malfunctions: analysis and empirically derived taxonomy.

J Am Med Inform Assoc 2018 05;25(5):496-506

Department of Biomedical Informatics, University of Texas Health Science Center at Houston, TX, USA.

Objective: To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions.

Materials And Methods: We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions.

Results: We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common.

Discussion: Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS.

Conclusion: CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent CDS malfunctions before they occur or detect and resolve them expediently.
http://dx.doi.org/10.1093/jamia/ocx106
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6019061
May 2018

Variation in high-priority drug-drug interaction alerts across institutions and electronic health records.

J Am Med Inform Assoc 2017 03;24(2):331-338

Partners Healthcare, Wellesley, Massachusetts, USA.

Objective: The United States Office of the National Coordinator for Health Information Technology sponsored the development of a "high-priority" list of drug-drug interactions (DDIs) to be used for clinical decision support. We assessed current adoption of this list and current alerting practice for these DDIs with regard to alert implementation (presence or absence of an alert) and display (alert appearance as interruptive or passive).

Materials And Methods: We conducted evaluations of electronic health records (EHRs) at a convenience sample of health care organizations across the United States using a standardized testing protocol with simulated orders.

Results: Evaluations of 19 systems were conducted at 13 sites using 14 different EHRs. Across systems, 69% of the high-priority DDI pairs produced alerts. Implementation and display of the DDI alerts tested varied between systems, even when the same EHR vendor was used. Across the drug pairs evaluated, implementation and display of DDI alerts differed, ranging from 27% (4/15) to 93% (14/15) implementation.

Discussion: Currently, there is no standard of care covering which DDI alerts to implement or how to display them to providers. Opportunities to improve DDI alerting include using differential displays based on DDI severity, establishing improved lists of clinically significant DDIs, and thoroughly reviewing organizational implementation decisions regarding DDIs.

Conclusion: DDI alerting is clinically important but not standardized. There is significant room for improvement and standardization around evidence-based DDIs.
http://dx.doi.org/10.1093/jamia/ocw114
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5391726
March 2017

The Big Phish: Cyberattacks Against U.S. Healthcare Systems.

J Gen Intern Med 2016 10;31(10):1115-8

Brigham and Women's Hospital, 1620 Tremont St., Boston, MA, 02115, USA.

http://dx.doi.org/10.1007/s11606-016-3741-z
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5023604
October 2016

Testing electronic health records in the "production" environment: an essential step in the journey to a safe and effective health care system.

J Am Med Inform Assoc 2017 01 23;24(1):188-192. Epub 2016 Apr 23.

University of Texas Health Science Center, University of Texas, Houston, TX, USA.

Thorough and ongoing testing of electronic health records (EHRs) is key to ensuring their safety and effectiveness. Many health care organizations limit testing to test environments separate from, and often different than, the production environment used by clinicians. Because EHRs are complex hardware and software systems that often interact with other hardware and software systems, no test environment can exactly mimic how the production environment will behave. An effective testing process must integrate safely conducted testing in the production environment itself, using test patients. We propose recommendations for how to safely incorporate testing in production into current EHR testing practices, with suggestions regarding the incremental release of upgrades, test patients, tester accounts, downstream personnel, and reporting.
http://dx.doi.org/10.1093/jamia/ocw039
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5201179
January 2017

Analysis of clinical decision support system malfunctions: a case series and survey.

J Am Med Inform Assoc 2016 11 28;23(6):1068-1076. Epub 2016 Mar 28.

Brigham & Women's Hospital, Boston, MA, USA.

Objective: To illustrate ways in which clinical decision support systems (CDSSs) malfunction and identify patterns of such malfunctions.

Materials And Methods: We identified and investigated several CDSS malfunctions at Brigham and Women's Hospital and present them as a case series. We also conducted a preliminary survey of Chief Medical Information Officers to assess the frequency of such malfunctions.

Results: We identified four CDSS malfunctions at Brigham and Women's Hospital: (1) an alert for monitoring thyroid function in patients receiving amiodarone stopped working when an internal identifier for amiodarone was changed in another system; (2) an alert for lead screening for children stopped working when the rule was inadvertently edited; (3) a software upgrade of the electronic health record software caused numerous spurious alerts to fire; and (4) a malfunction in an external drug classification system caused an alert to inappropriately suggest antiplatelet drugs, such as aspirin, for patients already taking one. We found that 93% of the Chief Medical Information Officers who responded to our survey had experienced at least one CDSS malfunction, and two-thirds experienced malfunctions at least annually.

Discussion: CDSS malfunctions are widespread and often persist for long periods. The failure of alerts to fire is particularly difficult to detect. A range of causes, including changes in codes and fields, software upgrades, inadvertent disabling or editing of rules, and malfunctions of external systems commonly contribute to CDSS malfunctions, and current approaches for preventing and detecting such malfunctions are inadequate.

Conclusion: CDSS malfunctions occur commonly and often go undetected. Better methods are needed to prevent and detect these malfunctions.
http://dx.doi.org/10.1093/jamia/ocw005
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5070518
November 2016