Publications by authors named "Hadi Kharrazi"

80 Publications

The PsyTAR dataset: From patients generated narratives to a corpus of adverse drug events and effectiveness of psychiatric medications.

Data Brief 2019 Jun 15;24:103838. Epub 2019 Mar 15.

College of Letters and Science, University of Wisconsin-Milwaukee, Milwaukee, WI, United States.

The "Psychiatric Treatment Adverse Reactions" (PsyTAR) dataset contains patients' expression of effectiveness and adverse drug events associated with psychiatric medications. The PsyTAR was generated in four phases. In the first phase, a sample of 891 drugs reviews posted by patients on an online healthcare forum, "askapatient.com", was collected for four psychiatric drugs: Zoloft, Lexapro, Cymbalta, and Effexor XR. For each drug review, patient demographic information, duration of treatment, and satisfaction with the drugs were reported. In the second phase, sentence classification, drug reviews were split to 6009 sentences, and each sentence was labeled for the presence of Adverse Drug Reaction (ADR), Withdrawal Symptoms (WDs), Sign/Symptoms/Illness (SSIs), Drug Indications (DIs), Drug Effectiveness (EF), Drug Infectiveness (INF), and Others (not applicable). In the third phases, entities including ADRs (4813 mentions), WDs (590 mentions), SSIs (1219 mentions), and DIs (792 mentions) were identified and extracted from the sentences. In the four phases, all the identified entities were mapped to the corresponding UMLS Metathesaurus concepts (916) and SNOMED CT concepts (755). In this phase, qualifiers representing severity and persistency of ADRs, WDs, SSIs, and DIs (e.g., mild, short term) were identified. All sentences and identified entities were linked to the original post using IDs (e.g., Zoloft.1, Effexor.29, Cymbalta.31). The PsyTAR dataset can be accessed via Online Supplement #1 under the CC BY 4.0 Data license. The updated versions of the dataset would also be accessible in https://sites.google.com/view/pharmacovigilanceinpsychiatry/home.
Source
http://dx.doi.org/10.1016/j.dib.2019.103838
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6495095
June 2019

The Impact of Social Determinants of Health on Hospitalization in the Veterans Health Administration.

Am J Prev Med 2019 06 17;56(6):811-818. Epub 2019 Apr 17.

Center for Population Health IT, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland.

Introduction: This study aims to assess the effect of individual and geographic-level social determinants of health on risk of hospitalization in the Veterans Health Administration primary care clinics known as the Patient Aligned Care Team.

Methods: For a population of Veterans enrolled in the primary care clinics, the study team extracted patient-level characteristics and healthcare utilization records from 2015 Veterans Health Administration electronic health record data. They also collected census data on social determinants of health factors for all U.S. census tracts. They used generalized estimating equation modeling and a spatial-based GIS analysis to assess the role of key social determinants of health on hospitalization. Data analysis was completed in 2018.
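
As a rough illustration of the modeling step described above, the sketch below fits a generalized estimating equation for a binary hospitalization outcome clustered by census tract using statsmodels. The synthetic data, variable names, and covariate list are invented for the example and do not reflect the study's actual specification.

```python
# Minimal GEE sketch: hospitalization ~ patient covariates + tract-level SES index,
# with observations clustered by census tract. Synthetic data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(64, 10, n),
    "sex": rng.choice(["M", "F"], n, p=[0.93, 0.07]),
    "race": rng.choice(["white", "nonwhite"], n),
    "ses_index": rng.normal(0, 1, n),          # hypothetical neighborhood SES index
    "census_tract": rng.integers(0, 100, n),   # cluster identifier
})
logit = -2.6 + 0.02 * (df["age"] - 64) - 0.3 * df["ses_index"]
df["hospitalized"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.gee(
    "hospitalized ~ age + C(sex) + C(race) + ses_index",
    groups="census_tract",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
print(np.exp(result.params))  # odds-ratio scale, e.g., for the SES index effect
```
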

Results: A total of 6.63% of the Veterans Health Administration population was hospitalized during 2015. Most of the hospitalized patients were male (93.40%) and white (68.80%); the mean age was 64.5 years. In the generalized estimating equation model, white Veterans had a 15% decreased odds of hospitalization compared with non-white Veterans. After controlling for patient-level characteristics, Veterans residing in census tracts with the higher neighborhood SES index experienced decreased odds of hospitalization. A spatial-based analysis presented variations in the hospitalization rate across the Veterans Health Administration primary care clinics and identified the clinic sites with an elevated risk of hospitalization (hotspots) compared with other clinics across the country.

Conclusions: By linking patient and population-level data at a geographic level, social determinants of health assessments can help with designing population health interventions and identifying features leading to potentially unnecessary hospitalization in selected geographic areas that appear to be outliers.
Source
http://dx.doi.org/10.1016/j.amepre.2018.12.012
June 2019

Evaluation of multidisciplinary collaboration in pediatric trauma care using EHR data.

J Am Med Inform Assoc 2019 06;26(6):506-515

Division of Health Sciences Informatics, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA.

Objectives: The study sought to identify collaborative electronic health record (EHR) usage patterns for pediatric trauma patients and determine how the usage patterns are related to patient outcomes.

Materials And Methods: A process mining-based network analysis was applied to EHR metadata and trauma registry data for a cohort of pediatric trauma patients with minor injuries at a Level I pediatric trauma center. The EHR metadata were processed into an event log that was segmented based on gaps in the temporal continuity of events. A usage pattern was constructed for each encounter by creating edges among functional roles that were captured within the same event log segment. These patterns were classified into groups using graph kernel and unsupervised spectral clustering methods. Demographics, clinical and network characteristics, and emergency department (ED) length of stay (LOS) of the groups were compared.
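
The sketch below is a simplified stand-in for the pattern-construction and clustering steps: each encounter is reduced to the set of role-to-role edges observed within its event-log segments, pairwise similarity is approximated with a Jaccard overlap of edge sets (a crude proxy for the graph kernel used in the study), and encounters are grouped with spectral clustering. The encounters, roles, and number of clusters are illustrative.

```python
# Stand-in for graph-kernel + spectral clustering of per-encounter collaboration graphs.
from itertools import combinations
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical encounters: each is a list of event-log segments, each segment the set
# of functional roles that accessed the chart within that segment.
encounters = [
    [{"ED physician", "ED nurse", "radiology"}, {"ED nurse", "child life"}],
    [{"ED physician", "ED nurse", "radiology"}, {"ED physician", "child life"}],
    [{"ED physician", "ED nurse"}, {"ED physician", "radiology"}, {"ED nurse", "radiology"}],
    [{"ED physician", "ED nurse"}, {"ED nurse", "otolaryngology"}],
    [{"ED physician"}, {"ED nurse"}, {"radiology"}],
    [{"ED nurse"}, {"radiology"}],
]

def edge_set(segments):
    """Undirected role-role edges co-occurring within any segment of one encounter."""
    edges = set()
    for roles in segments:
        edges.update(frozenset(pair) for pair in combinations(sorted(roles), 2))
    return edges

edge_sets = [edge_set(e) for e in encounters]
n = len(edge_sets)
sim = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        union = edge_sets[i] | edge_sets[j]
        sim[i, j] = sim[j, i] = len(edge_sets[i] & edge_sets[j]) / len(union) if union else 1.0

labels = SpectralClustering(n_clusters=3, affinity="precomputed", random_state=0).fit_predict(sim)
print(labels)  # toy analogue of the clique / partially connected / isolated groups
```
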

Results: Three distinct usage patterns that differed by network density were discovered: fully connected (clique), partially connected, and disconnected (isolated). Compared with the fully connected pattern, encounters with the partially connected pattern had a significantly longer adjusted median ED LOS (fully connected, 242.6 [95% confidence interval, 236.9-246.0] minutes vs partially connected, 295.2 [95% confidence interval, 289.2-297.8] minutes), were seen more frequently among day shift and weekday arrivals, and more often involved otolaryngology and ophthalmology services and child life specialists.

Discussion: The clique-like usage pattern was associated with a decreased ED LOS for the study cohort, suggesting that a greater degree of collaboration resulted in a shorter stay.

Conclusions: Further investigation to understand and address causal factors can lead to improvement in multidisciplinary collaboration.
Source
http://dx.doi.org/10.1093/jamia/ocy184
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6515526
June 2019

Linking Electronic Health Record and Trauma Registry Data: Assessing the Value of Probabilistic Linkage.

Methods Inf Med 2018 11 15;57(5-06):261-269. Epub 2019 Mar 15.

Center for Health Care Human Factors, Armstrong Institute for Patient Safety and Quality, Johns Hopkins Medicine, Johns Hopkins University, Baltimore, Maryland, United States.

Background: Electronic health record (EHR) systems contain large volumes of novel heterogeneous data that can be linked to trauma registry data to enable innovative research not possible with either data source alone.

Objective: This article describes an approach for linking electronically extracted EHR data to trauma registry data at the institutional level and assesses the value of probabilistic linkage.

Methods: Encounter data were independently obtained from the EHR data warehouse (n = 1,632) and the pediatric trauma registry (n = 1,829) at a Level I pediatric trauma center. Deterministic linkage was attempted using nine different combinations of medical record number (MRN), encounter ID (visit ID), age, gender, and emergency department (ED) arrival date. True matches from the best performing variable combination were used to create a gold standard, which was used to evaluate the performance of each variable combination and to train a probabilistic algorithm that was separately applied to the records left unmatched by deterministic linkage and to the entire cohort. Additional records that matched probabilistically were investigated via chart review and compared against records that matched deterministically.
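
A minimal sketch of the deterministic rule described above (exact agreement on at least three of the five identifiers) follows. The field names and records are invented, and the subsequent probabilistic step (e.g., Fellegi-Sunter style weighting) is not shown.

```python
# Deterministic linkage sketch: declare a match when >= 3 of 5 identifiers agree exactly.
FIELDS = ["mrn", "encounter_id", "age", "gender", "ed_arrival_date"]

def deterministic_match(ehr_rec: dict, registry_rec: dict, min_agree: int = 3) -> bool:
    agree = sum(
        ehr_rec.get(f) is not None and ehr_rec.get(f) == registry_rec.get(f)
        for f in FIELDS
    )
    return agree >= min_agree

ehr = {"mrn": "12345", "encounter_id": "E-9", "age": 7, "gender": "F",
       "ed_arrival_date": "2015-03-02"}
reg = {"mrn": "12354", "encounter_id": "E-9", "age": 7, "gender": "F",
       "ed_arrival_date": "2015-03-02"}  # MRN transposition error

print(deterministic_match(ehr, reg))  # True: 4 of 5 fields agree despite the MRN mismatch
```

A rule of this form tolerates a single corrupted identifier (such as the MRN mismatches reported in the Results), which is why the residual unmatched records are then handed to the probabilistic step.
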

Results: Deterministic linkage with exact matching on any three of MRN, encounter ID, age, gender, and ED arrival date gave the best yield of 1,276 true matches, while an additional probabilistic linkage step following deterministic linkage yielded 110 further true matches. These records contained a significantly higher proportion of boys than the records that matched deterministically, and the discrepancy was attributable to mismatched MRNs between the two data sets. Probabilistic linkage of the entire cohort yielded 1,363 true matches.

Conclusion: The combination of deterministic and an additional probabilistic method represents a robust approach for linking EHR data to trauma registry data. This approach may be generalizable to studies involving other registries and databases.
Source
http://dx.doi.org/10.1055/s-0039-1681087
November 2018

Extraction of Geriatric Syndromes From Electronic Health Record Clinical Notes: Assessment of Statistical Natural Language Processing Methods.

JMIR Med Inform 2019 Mar 26;7(1):e13039. Epub 2019 Mar 26.

Center for Population Health IT, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, United States.

Background: Geriatric syndromes in older adults are associated with adverse outcomes. However, despite being reported in clinical notes, these syndromes are often poorly captured by diagnostic codes in the structured fields of electronic health records (EHRs) or administrative records.

Objective: We aim to automatically determine if a patient has any geriatric syndromes by mining the free text of associated EHR clinical notes. We assessed which statistical natural language processing (NLP) techniques are most effective.

Methods: We applied conditional random fields (CRFs), a widely used machine learning algorithm, to identify each of 10 geriatric syndrome constructs in a clinical note. We assessed three sets of features and attributes for CRF operations: a base set, enhanced token, and contextual features. We trained the CRF on 3901 manually annotated notes from 85 patients, tuned the CRF on a validation set of 50 patients, and evaluated it on 50 held-out test patients. These notes were from a group of US Medicare patients over 65 years of age enrolled in a Medicare Advantage Health Maintenance Organization and cared for by a large group practice in Massachusetts.
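
The following is a minimal CRF sketch in the spirit of the approach described above, using the sklearn-crfsuite package (pip install sklearn-crfsuite). The token features, BIO tags, and training sentences are simplified stand-ins for the study's base, enhanced-token, and contextual feature sets.

```python
# Toy CRF for tagging geriatric-syndrome mentions in note text (BIO scheme).
import sklearn_crfsuite

def token_features(tokens, i):
    feats = {
        "bias": 1.0,
        "word.lower": tokens[i].lower(),
        "word.suffix3": tokens[i][-3:].lower(),
        "word.isdigit": tokens[i].isdigit(),
    }
    feats["prev.lower"] = tokens[i - 1].lower() if i > 0 else "BOS"
    feats["next.lower"] = tokens[i + 1].lower() if i < len(tokens) - 1 else "EOS"
    return feats

train_sents = [
    (["Patient", "reports", "frequent", "falls", "at", "home", "."],
     ["O", "O", "O", "B-FALLS", "O", "O", "O"]),
    (["Denies", "recent", "weight", "loss", "."],
     ["O", "O", "B-WEIGHT_LOSS", "I-WEIGHT_LOSS", "O"]),
]
X_train = [[token_features(toks, i) for i in range(len(toks))] for toks, _ in train_sents]
y_train = [tags for _, tags in train_sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100, all_possible_transitions=True)
crf.fit(X_train, y_train)
print(crf.predict([[token_features(["History", "of", "falls"], i) for i in range(3)]]))
```
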

Results: A final feature set was formed through comprehensive feature ablation experiments. The final CRF model performed well at patient-level determination (macroaverage F1=0.834, microaverage F1=0.851); however, performance varied by construct. For example, at phrase-partial evaluation, the CRF model worked well on constructs such as absence of fecal control (F1=0.857) and vision impairment (F1=0.798) but poorly on malnutrition (F1=0.155), weight loss (F1=0.394), and severe urinary control issues (F1=0.532). Errors were primarily due to previously unobserved words (ie, out-of-vocabulary) and a lack of context.

Conclusions: This study shows that statistical NLP can be used to identify geriatric syndromes from EHR-extracted clinical notes. This creates new opportunities to identify patients with geriatric syndromes and study their health outcomes.
Source
http://dx.doi.org/10.2196/13039
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6454337
March 2019

Exploring the use of machine learning for risk adjustment: A comparison of standard and penalized linear regression models in predicting health care costs in older adults.

PLoS One 2019 6;14(3):e0213258. Epub 2019 Mar 6.

Center for Population Health IT, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, United States of America.

Background: Payers and providers still primarily use ordinary least squares (OLS) to estimate expected economic and clinical outcomes for risk adjustment purposes. Penalized linear regression represents a practical and incremental step forward that provides transparency and interpretability within the familiar regression framework. This study conducted an in-depth comparison of prediction performance of standard and penalized linear regression in predicting future health care costs in older adults.

Methods And Findings: This retrospective cohort study included 81,106 Medicare Advantage patients with 5 years of continuous medical and pharmacy insurance from 2009 to 2013. Total health care costs in 2013 were predicted with comorbidity indicators from 2009 to 2012. Using 2012 predictors only, OLS performed poorly (e.g., R2 = 16.3%) compared to penalized linear regression models (R2 ranging from 16.8 to 16.9%); using 2009-2012 predictors, the gap in prediction performance increased (R2:15.0% versus 18.0-18.2%). OLS with a reduced set of predictors selected by lasso showed improved performance (R2 = 16.6% with 2012 predictors, 17.4% with 2009-2012 predictors) relative to OLS without variable selection but still lagged behind the prediction performance of penalized regression. Lasso regression consistently generated prediction ratios closer to 1 across different levels of predicted risk compared to other models.
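
To make the comparison concrete, the sketch below contrasts OLS with cross-validated lasso and ridge on a synthetic high-dimensional cost outcome with only a few truly predictive comorbidity flags. The data, predictor counts, and hyperparameter grids are illustrative and not the study's specification.

```python
# OLS vs penalized linear regression on a synthetic cost-prediction problem.
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV, RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n, p, informative = 5000, 300, 20
X = rng.binomial(1, 0.1, size=(n, p)).astype(float)   # sparse comorbidity indicators
beta = np.zeros(p)
beta[:informative] = rng.normal(2000, 500, informative)
y = 1500 + X @ beta + rng.gamma(shape=2.0, scale=1500.0, size=n)  # skewed cost outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "OLS": LinearRegression(),
    "Lasso (CV)": LassoCV(cv=5, random_state=0),
    "Ridge (CV)": RidgeCV(alphas=np.logspace(-2, 4, 20)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:12s} held-out R2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```
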

Conclusions: This study demonstrated the advantages of using transparent and easy-to-interpret penalized linear regression for predicting future health care costs in older adults relative to standard linear regression. Penalized regression showed better performance than OLS in predicting health care costs. Applying penalized regression to longitudinal data increased prediction accuracy. Lasso regression in particular showed superior prediction ratios across low and high levels of predicted risk. Health care insurers, providers and policy makers may benefit from adopting penalized regression such as lasso regression for cost prediction to improve risk adjustment and population health management and thus better address the underlying needs and risk of the populations they serve.
Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0213258
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6402678
December 2019

Clinical Concept Value Sets and Interoperability in Health Data Analytics.

AMIA Annu Symp Proc 2018 5;2018:480-489. Epub 2018 Dec 5.

University of North Carolina, Chapel Hill, NC.

This paper focuses on clinical concept value sets as an essential component in the health analytics ecosystem. We discuss shared repositories of reusable value sets and offer recommendations for their further development and adoption. In order to motivate these contributions, we explain how value sets fit into specific analytic tasks and the health analytics landscape more broadly; their growing importance and ubiquity with the advent of Common Data Models, Distributed Research Networks, and the availability of higher order, reusable analytic resources like electronic phenotypes and electronic clinical quality measures; the formidable barriers to value set reuse; and our introduction of a concept-agnostic orientation to vocabulary collections. The costs of ad hoc value set management and the benefits of value set reuse are described or implied throughout. Our standards, infrastructure, and design recommendations are not systematic or comprehensive but invite further work to support value set reuse for health analytics.
Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6371254
September 2019

A public health perspective on using electronic health records to address social determinants of health: The potential for a national system of local community health records in the United States.

Int J Med Inform 2019 04 24;124:86-89. Epub 2019 Jan 24.

Center for Population Health IT, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA.

Community health records (CHRs) are defined as "a curated set of population-level indicators that describe the health and quality of life of a geographic community". CHRs encompass clinical, social determinants of health (SDOH), and public health data aggregated at the neighborhood level. If developed and deployed across communities, CHRs provide an opportunity to track and enhance population health on a regional or national level. Electronic Health Records (EHRs), if linked across providers, can document certain indicators of SDOH in addition to capturing clinical data for residents of a community. Moreover, EHR-derived patient-level SDOH information could be collated with geographic level public health and social services information to create the basis for neighborhood-specific CHRs. An EHR-derived CHR - relative to current survey-based assessments used by public health agencies in the United States and other countries - could dramatically increase the scope, quality, and timeliness of data available for planning interventions targeted at SDOH factors at both the consumer and small-area levels. EHR-derived CHRs, if assembled across neighborhoods, could also offer a significant value to the society by providing population-level SDOH data across various regions and eventually nationwide.
Source
http://dx.doi.org/10.1016/j.ijmedinf.2019.01.012
April 2019

A systematic approach for developing a corpus of patient reported adverse drug events: A case study for SSRI and SNRI medications.

J Biomed Inform 2019 02 4;90:103091. Epub 2019 Jan 4.

School of Computing and Engineering, University of Missouri-Kansas, Kansas City, MO, United States.

"Psychiatric Treatment Adverse Reactions" (PsyTAR) corpus is an annotated corpus that has been developed using patients narrative data for psychiatric medications, particularly SSRIs (Selective Serotonin Reuptake Inhibitor) and SNRIs (Serotonin Norepinephrine Reuptake Inhibitor) medications. This corpus consists of three main components: sentence classification, entity identification, and entity normalization. We split the review posts into sentences and labeled them for presence of adverse drug reactions (ADRs) (2168 sentences), withdrawal symptoms (WDs) (438 sentences), sign/symptoms/illness (SSIs) (789 sentences), drug indications (517), drug effectiveness (EF) (1087 sentences), and drug infectiveness (INF) (337 sentences). In the entity identification phase, we identified and extracted ADRs (4813 mentions), WDs (590 mentions), SSIs (1219 mentions), and DIs (792). In the entity normalization phase, we mapped the identified entities to the corresponding concepts in both UMLS (918 unique concepts) and SNOMED CT (755 unique concepts). Four annotators double coded the sentences and the span of identified entities by strictly following guidelines rules developed for this study. We used the PsyTAR sentence classification component to automatically train a range of supervised machine learning classifiers to identifying text segments with the mentions of ADRs, WDs, DIs, SSIs, EF, and INF. SVMs classifiers had the highest performance with F-Score 0.90. We also measured performance of the cTAKES (clinical Text Analysis and Knowledge Extraction System) in identifying patients' expressions of ADRs and WDs with and without adding PsyTAR dictionary to the core dictionary of cTAKES. Augmenting cTAKES dictionary with PsyTAR improved the F-score cTAKES by 25%. The findings imply that PsyTAR has significant implications for text mining algorithms aimed to identify information about adverse drug events and drug effectiveness from patients' narratives data, by linking the patients' expressions of adverse drug events to medical standard vocabularies. The corpus is publicly available at Zolnoori et al. [30].
Source
http://dx.doi.org/10.1016/j.jbi.2018.12.005
February 2019

Linking Electronic Health Record and Trauma Registry Data: Assessing the Value of Probabilistic Linkage.

Methods Inf Med 2018 11 19;57(5-06):e3. Epub 2018 Nov 19.

Center for Health Care Human Factors, Armstrong Institute for Patient Safety and Quality, Johns Hopkins Medicine, Johns Hopkins University, Baltimore, Maryland, United States.

Source
http://dx.doi.org/10.1055/s-0038-1675220
November 2018

Assessing the Impact of Body Mass Index Information on the Performance of Risk Adjustment Models in Predicting Health Care Costs and Utilization.

Med Care 2018 12;56(12):1042-1050

Department of Medicine, Division of General Internal Medicine, Johns Hopkins University School of Medicine.

Background: Using electronic health records (EHRs) for population risk stratification has gained attention in recent years. Compared with insurance claims, EHRs offer novel data types (eg, vital signs) that can potentially improve population-based predictive models of cost and utilization.

Objective: To evaluate whether EHR-extracted body mass index (BMI) improves the performance of diagnosis-based models to predict concurrent and prospective health care costs and utilization.

Methods: We used claims and EHR data over a 2-year period from a cohort of continuously insured patients (aged 20-64 y) within an integrated health system. We examined the addition of BMI to 3 diagnosis-based models of increasing comprehensiveness (ie, demographics, Charlson, and Dx-PM model of the Adjusted Clinical Group system) to predict concurrent and prospective costs and utilization, and compared the performance of models with and without BMI.
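
A minimal sketch of the "does adding BMI help?" comparison follows: two logistic models, with and without BMI, are fit for a binary utilization outcome and compared by AUC on held-out data. The synthetic cohort and coefficients are invented and do not represent the ACG-based models used in the study.

```python
# Compare a demographics-only model vs demographics + BMI for a binary outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20000
age = rng.uniform(20, 64, n)
female = rng.binomial(1, 0.57, n)
bmi = rng.normal(28, 6, n)
logit = -4.0 + 0.02 * (age - 40) + 0.06 * np.clip(bmi - 30, 0, None)
hospitalized = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_base = np.column_stack([age, female])
X_bmi = np.column_stack([age, female, bmi])
idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=1)

for label, X in [("demographics only", X_base), ("demographics + BMI", X_bmi)]:
    clf = LogisticRegression(max_iter=1000).fit(X[idx_tr], hospitalized[idx_tr])
    auc = roc_auc_score(hospitalized[idx_te], clf.predict_proba(X[idx_te])[:, 1])
    print(f"{label:22s} AUC = {auc:.3f}")
```
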

Results: The study population included 59,849 patients, 57% female, with BMI class I, II, and III comprising 19%, 9%, and 6% of the population. Among demographic models, the R2 improvement from adding BMI ranged from 61% (ie, R2 increased from 0.56 to 0.90) for prospective pharmacy cost to 29% (1.24-1.60) for concurrent medical cost. Adding BMI to demographic models improved the prediction of all binary service-linked outcomes (ie, hospitalization, emergency department admission, and being in the top 5% of total costs), with improvements in the area under the curve ranging from 2% (0.602-0.617) to 7% (0.516-0.554). Adding BMI to Charlson models improved only total and medical cost predictions prospectively (13% and 15%; 4.23-4.79 and 3.30-3.79), and also improved the prediction of all prospective outcomes, with improvements in the area under the curve ranging from 3% (0.649-0.668) to 4% (0.639-0.665 and 0.556-0.576). No improvements in prediction were seen in the most comprehensive model (ie, Dx-PM).

Discussion: EHR-extracted BMI levels can be used to enhance predictive models of utilization, especially if comprehensive diagnostic data are missing.
Source
http://dx.doi.org/10.1097/MLR.0000000000001001
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6231962
December 2018

Identifying the Underlying Factors Associated With Patients' Attitudes Toward Antidepressants: Qualitative and Quantitative Analysis of Patient Drug Reviews.

JMIR Ment Health 2018 Sep 30;5(4):e10726. Epub 2018 Sep 30.

Industrial and Manufacturing Engineering, College of Engineering & Applied Science, University of Wisconsin-Milwaukee, Milwaukee, WI, United States.

Background: Nonadherence to antidepressants is a major obstacle to deriving antidepressants' therapeutic benefits, resulting in significant burdens on the individuals and the health care system. Several studies have shown that nonadherence is weakly associated with personal and clinical variables but strongly associated with patients' beliefs and attitudes toward medications. Patients' drug review posts in online health care communities might provide a significant insight into patients' attitude toward antidepressants and could be used to address the challenges of self-report methods such as patients' recruitment.

Objective: The aim of this study was to use patient-generated data to identify factors affecting patients' attitudes toward 4 antidepressant drugs (sertraline [Zoloft], escitalopram [Lexapro], duloxetine [Cymbalta], and venlafaxine [Effexor XR]), which, in turn, are a strong determinant of treatment nonadherence. We hypothesized that clinical variables (drug effectiveness; adverse drug reactions, ADRs; perceived distress from ADRs, ADR-PD; and duration of treatment) and personal variables (age, gender, and patients' knowledge about medications) are associated with patients' attitudes toward antidepressants, and that experience of ADRs and drug ineffectiveness is strongly associated with negative attitudes.

Methods: We used both qualitative and quantitative methods to analyze the dataset. Patients' drug reviews were randomly selected from a health care forum called askapatient. The Framework method was used to build the analytical framework containing the themes for developing structured data from the qualitative drug reviews. Then, 4 annotators coded the drug reviews at the sentence level using the analytical framework. After managing missing values, we used chi-square and ordinal logistic regression to test and model the association between variables and attitude.
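
As an illustration of the ordinal modeling step, the sketch below fits a proportional-odds (ordinal logistic) model for a 1-5 attitude rating with statsmodels' OrderedModel (available in statsmodels 0.12+). The variables, coding, and data are synthetic assumptions rather than the study's analytic framework.

```python
# Ordinal logistic sketch: attitude (1-5) ~ effectiveness + ADR-related distress.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    "effective": rng.binomial(1, 0.6, n),    # perceived drug effectiveness (yes/no)
    "adr_distress": rng.integers(0, 4, n),   # 0 = none ... 3 = severe distress from ADRs
})
latent = 1.5 * df["effective"] - 0.8 * df["adr_distress"] + rng.logistic(size=n)
df["attitude"] = pd.cut(latent, bins=[-np.inf, -2, -0.5, 1, 2.5, np.inf],
                        labels=[1, 2, 3, 4, 5])  # ordered categorical rating

model = OrderedModel(df["attitude"], df[["effective", "adr_distress"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
print(np.exp(result.params[["effective", "adr_distress"]]))  # odds-ratio interpretation
```
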

Results: A total of 892 reviews posted between February 2001 and September 2016 were analyzed. Most of the patients were females (680/892, 76.2%) and aged less than 40 years (540/892, 60.5%). Patient attitude was significantly (P<.001) associated with experience of ADRs, ADR-PD, drug effectiveness, perceived lack of knowledge, experience of withdrawal, and duration of usage, whereas neither age (F=0.72, P=.58) nor gender (χ2=2.7, P=.21) was found to be associated with patient attitudes. Moreover, modeling the relationship between variables and attitudes showed that drug effectiveness and perceived distress from adverse drug reactions were the 2 most significant factors affecting patients' attitude toward antidepressants.

Conclusions: Patients' self-report experiences of medications in online health care communities can provide a direct insight into the underlying factors associated with patients' perceptions and attitudes toward antidepressants. However, it cannot be used as a replacement for self-report methods because of the lack of information for some of the variables, colloquial language, and the unstructured format of the reports.
Source
http://dx.doi.org/10.2196/10726
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6876546
September 2018

Effectiveness of Policies and Programs to Combat Adult Obesity: a Systematic Review.

J Gen Intern Med 2018 11 11;33(11):1990-2001. Epub 2018 Sep 11.

Division of General Internal Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA.

Background: This systematic review identifies programs, policies, and built-environment changes targeting prevention and control of adult obesity and evaluates their effectiveness.

Methods: We searched PubMed, CINAHL, PsycINFO, and EconLit from January 2000 to March 2018. We included natural experiment studies evaluating a program, policy, or built-environment change targeting adult obesity and reporting weight/body mass index (BMI). Studies were categorized by primary intervention target: physical activity/built environment, food/beverage, messaging, or multiple. Two reviewers independently assessed the risk of bias for each study using the Effective Public Health Practice Project tool.

Results: Of 158 natural experiments targeting obesity, 17 reported adult weight/BMI outcomes. Four of 9 studies reporting on physical activity/built environment demonstrated reduced weight/BMI, although effect sizes were small with low strength of evidence and high risk of bias. None of the 5 studies targeting the food/beverage environment decreased weight/BMI; strength of evidence was low, and 2 studies were rated high risk of bias.

Discussion: We identified few natural experiments reporting on the effectiveness of programs, policies, and built-environment changes on adult obesity. Overall, we found no evidence that policies intended to promote physical activity and healthy eating had beneficial effects on weight/BMI, and most studies had a high risk of bias. Limitations include the small number of studies that met our inclusion criteria; the exclusion of studies in children and of studies not reporting weight/BMI outcomes; and highly heterogeneous weight/BMI reporting. More high-quality research, including natural experiment studies, is critical for informing the population-level effectiveness of obesity prevention and control initiatives in adults.
Source
http://dx.doi.org/10.1007/s11606-018-4619-z
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6206360
November 2018

Public and Population Health Informatics: The Bridging of Big Data to Benefit Communities.

Yearb Med Inform 2018 Aug 29;27(1):199-206. Epub 2018 Aug 29.

Center for Population Health Information Technology, Johns Hopkins Bloomberg School of Public Health, Baltimore, USA.

Objective:  To summarize the recent public and population health informatics literature with a focus on the synergistic "bridging" of electronic data to benefit communities and other populations.

Methods:  The review was primarily driven by a search of the literature from July 1, 2016 to September 30, 2017. The search included articles indexed in PubMed using the Medical Subject Headings (MeSH) keywords "public health informatics" and "social determinants of health". The "social determinants of health" search was refined to include articles that contained the keywords "public health", "population health", or "surveillance".

Results:  Several categories were observed in the review focusing on public health's socio-technical infrastructure: evaluation of surveillance practices, surveillance methods, interoperable health information infrastructure, mobile health, social media, and population health. Common trends discussing socio-technical infrastructure included big data platforms, social determinants of health, geographical information systems, novel data sources, and new visualization techniques. A common thread connected these categories of workforce, governance, and sustainability: using clinical resources and data to bridge public and population health.

Conclusions:  Both medical care providers and public health agencies are increasingly using informatics and big data tools to create and share digital information. The intent of this "bridging" is to proactively identify, monitor, and improve a range of medical, environmental, and social factors relevant to the health of communities. These efforts show a significant growth in a range of population health-centric information exchange and analytics activities.
Source
http://dx.doi.org/10.1055/s-0038-1667081
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6115205
August 2018

Forecasting the Maturation of Electronic Health Record Functions Among US Hospitals: Retrospective Analysis and Predictive Model.

J Med Internet Res 2018 08 7;20(8):e10458. Epub 2018 Aug 7.

Department of Health Care Organization and Policy, School of Public Health, University of Alabama Birmingham, Birmingham, AL, United States.

Background: The Meaningful Use (MU) program has promoted electronic health record adoption among US hospitals. Studies have shown that electronic health record adoption has been slower than desired in certain types of hospitals; but generally, the overall adoption rate has increased among hospitals. However, these studies have neither evaluated the adoption of advanced functionalities of electronic health records (beyond MU) nor forecasted electronic health record maturation over an extended period in a holistic fashion. Additional research is needed to prospectively assess US hospitals' electronic health record technology adoption and advancement patterns.

Objective: This study forecasts the maturation of electronic health record functionality adoption among US hospitals through 2035.

Methods: The Healthcare Information and Management Systems Society (HIMSS) Analytics' Electronic Medical Record Adoption Model (EMRAM) dataset was used to track historic uptakes of various electronic health record functionalities considered critical to improving health care quality and efficiency in hospitals. The Bass model was used to predict the technological diffusion rates for repeated electronic health record adoptions where upgrades undergo rapid technological improvements. The forecast used EMRAM data from 2006 to 2014 to estimate adoption levels to the year 2035.
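
For readers unfamiliar with the Bass model, the sketch below fits the standard Bass cumulative-adoption curve to a made-up series of yearly adoption shares with scipy and locates the peak diffusion year analytically. The observed values, starting guesses, and bounds are illustrative, not the EMRAM data used in the study.

```python
# Fit a Bass diffusion curve to yearly cumulative adoption shares (toy data).
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, p, q, m):
    """Cumulative adopters at time t: innovation coeff p, imitation coeff q, market size m."""
    return m * (1 - np.exp(-(p + q) * t)) / (1 + (q / p) * np.exp(-(p + q) * t))

years = np.arange(2006, 2015)
t = years - years[0]
observed = np.array([0.02, 0.04, 0.08, 0.15, 0.25, 0.38, 0.52, 0.63, 0.72])  # illustrative

params, _ = curve_fit(bass_cumulative, t, observed,
                      p0=[0.01, 0.4, 1.0], bounds=(1e-6, [1.0, 5.0, 2.0]))
p, q, m = params
print(f"p={p:.3f}, q={q:.3f}, m={m:.2f}")

# In the Bass model the adoption rate peaks at t* = ln(q/p) / (p + q); projecting the
# fitted curve beyond the observed years gives the kind of forecast described above.
t_star = np.log(q / p) / (p + q)
print("peak adoption year ~=", years[0] + t_star)
```
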

Results: In 2014, over 5400 hospitals completed HIMSS' annual EMRAM survey (86%+ of total US hospitals). In 2006, the majority of the US hospitals were in EMRAM Stages 0, 1, and 2. By 2014, most hospitals had achieved Stages 3, 4, and 5. The overall technology diffusion model (ie, the Bass model) reached an adjusted R-squared of .91. The final forecast depicted differing trends for each of the EMRAM stages. In 2006, the first year of observation, peaks of Stages 0 and 1 were shown as electronic health record adoption predates HIMSS' EMRAM. By 2007, Stage 2 reached its peak. Stage 3 reached its full height by 2011, while Stage 4 peaked by 2014. The first three stages created a graph that exhibits the expected "S-curve" for technology diffusion, with inflection point being the peak diffusion rate. This forecast indicates that Stage 5 should peak by 2019 and Stage 6 by 2026. Although this forecast extends to the year 2035, no peak was readily observed for Stage 7. Overall, most hospitals will achieve Stages 5, 6, or 7 of EMRAM by 2020; however, a considerable number of hospitals will not achieve Stage 7 by 2035.

Conclusions: We forecasted the adoption of electronic health record capabilities from a paper-based environment (Stage 0) to an environment where only electronic information is used to document and direct care delivery (Stage 7). According to our forecasts, the majority of hospitals will not reach Stage 7 until 2035, absent major policy changes or leaps in technological capabilities. These results indicate that US hospitals are decades away from fully implementing sophisticated decision support applications and interoperability functionalities in electronic health records as defined by EMRAM's Stage 7.
Source
http://dx.doi.org/10.2196/10458
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6104443
August 2018

The Value of Unstructured Electronic Health Record Data in Geriatric Syndrome Case Identification.

J Am Geriatr Soc 2018 08 4;66(8):1499-1507. Epub 2018 Jul 4.

Center for Population Health Information Technology, Department of Health Policy and Management, Bloomberg School of Public Health.

Objectives: To examine the value of unstructured electronic health record (EHR) data (free-text notes) in identifying a set of geriatric syndromes.

Design: Retrospective analysis of unstructured EHR notes using a natural language processing (NLP) algorithm.

Setting: Large multispecialty group.

Participants: Older adults (N=18,341; average age 75.9, 58.9% female).

Measurements: We compared the number of geriatric syndrome cases identified using structured claims and structured and unstructured EHR data. We also calculated these rates using a population-level claims database as a reference and identified comparable epidemiological rates in peer-reviewed literature as a benchmark.

Results: Using insurance claims data resulted in a geriatric syndrome prevalence ranging from 0.03% for lack of social support to 8.3% for walking difficulty. Using structured EHR data resulted in similar prevalence rates, ranging from 0.03% for malnutrition to 7.85% for walking difficulty. Incorporating unstructured EHR notes, enabled by applying the NLP algorithm, identified considerably higher rates of geriatric syndromes: absence of fecal control (2.1%, 2.3 times as much as structured claims and EHR data combined), decubitus ulcer (1.4%, 1.7 times as much), dementia (6.7%, 1.5 times as much), falls (23.6%, 3.2 times as much), malnutrition (2.5%, 18.0 times as much), lack of social support (29.8%, 455.9 times as much), urinary retention (4.2%, 3.9 times as much), vision impairment (6.2%, 7.4 times as much), weight loss (19.2%, 2.9 times as much), and walking difficulty (36.34%, 3.4 times as much). The geriatric syndrome rates extracted from structured data were substantially lower than published epidemiological rates, although adding the NLP results considerably closed this gap.

Conclusion: Claims and structured EHR data give an incomplete picture of burden related to geriatric syndromes. Geriatric syndromes are likely to be missed if unstructured data are not analyzed. Pragmatic NLP algorithms can assist with identifying individuals at high risk of experiencing geriatric syndromes and improving coordination of care for older adults.
Source
http://dx.doi.org/10.1111/jgs.15411
August 2018

Assessing markers from ambulatory laboratory tests for predicting high-risk patients.

Am J Manag Care 2018 06 1;24(6):e190-e195. Epub 2018 Jun 1.

Center for Population Health Information Technology, The Johns Hopkins University Bloomberg School of Public Health, 624 North Broadway, Rm 601, Baltimore, MD 21205. Email:

Objectives: This exploratory study used outpatient laboratory test results from electronic health records (EHRs) for patient risk assessment and evaluated whether risk markers based on laboratory results improve the performance of diagnosis- and pharmacy-based predictive models for healthcare outcomes.

Study Design: Observational study of a patient cohort over 2 years.

Methods: We used administrative claims and EHR data over a 2-year period for a population of continuously insured patients in an integrated health system who had at least 1 ambulatory visit during the first year. We performed regression tree analyses to develop risk markers from frequently ordered outpatient laboratory tests. We added these risk markers to demographic and Charlson Comorbidity Index models and 3 models from the Johns Hopkins Adjusted Clinical Groups system to predict individual cost, inpatient admission, and high-cost patients. We evaluated the predictive and discriminatory performance of 5 lab-enhanced models.

Results: Our study population included 120,844 patients. Adding laboratory markers to base models improved R2 predictions of costs by 0.1% to 3.7%, identification of high-cost patients by 3.4% to 121%, and identification of patients with inpatient admissions by 1.0% to 188% for the demographic model. The addition of laboratory risk markers to comprehensive risk models, compared with simpler models, resulted in smaller improvements in predictive power.

Conclusions: The addition of laboratory risk markers can significantly improve the identification of high-risk patients using models that include age, gender, and a limited number of morbidities; however, models that use comprehensive risk measures may be only marginally improved.
June 2018

Healthcare costs and utilization associated with high-risk prescription opioid use: a retrospective cohort study.

BMC Med 2018 05 16;16(1):69. Epub 2018 May 16.

Center for Drug Safety and Effectiveness, Johns Hopkins University, Baltimore, MD, USA.

Background: Previous studies on high-risk opioid use have only focused on patients diagnosed with an opioid disorder. This study evaluates the impact of various high-risk prescription opioid use groups on healthcare costs and utilization.

Methods: This is a retrospective cohort study using QuintilesIMS health plan claims with independent variables from 2012 and outcomes from 2013. We included a population-based sample of 191,405 non-elderly adults with known sex, one or more opioid prescriptions, and continuous enrollment in 2012 and 2013. Three high-risk opioid use groups were identified in 2012 as (1) persons with 100+ morphine milligram equivalents per day for 90+ consecutive days (chronic users); (2) persons with 30+ days of concomitant opioid and benzodiazepine use (concomitant users); and (3) individuals diagnosed with an opioid use disorder. The length of time that a person had been characterized as a high-risk user was measured. Three healthcare costs (total, medical, and pharmacy costs) and four binary utilization indicators (the top 5% total cost users, the top 5% pharmacy cost users, any hospitalization, and any emergency department visit) derived from 2013 were outcomes. We applied a generalized linear model (GLM) with a log-link function and gamma distribution for costs while logistic regression was employed for utilization indicators. We also adopted propensity score weighting to control for the baseline differences between high-risk and non-high-risk opioid users.
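
The sketch below illustrates the general modeling recipe described above: a propensity model for high-risk use, inverse-probability weights (one common form of propensity score weighting), and a weighted log-link gamma GLM for costs. The variable names, weight construction, and synthetic data are simplifying assumptions, not the study's implementation.

```python
# Propensity-weighted gamma GLM (log link) for healthcare costs, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "age": rng.uniform(18, 64, n),
    "female": rng.binomial(1, 0.5, n),
    "comorbidity": rng.poisson(1.5, n),
})
ps_logit = -3 + 0.02 * df["age"] + 0.4 * df["comorbidity"]
df["chronic_user"] = rng.binomial(1, 1 / (1 + np.exp(-ps_logit)))
mu = np.exp(7.5 + 0.35 * df["chronic_user"] + 0.1 * df["comorbidity"])
df["total_cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)

# Propensity model and stabilized inverse-probability weights.
X_ps = sm.add_constant(df[["age", "female", "comorbidity"]])
ps = sm.Logit(df["chronic_user"], X_ps).fit(disp=False).predict(X_ps)
p_treat = df["chronic_user"].mean()
w = np.where(df["chronic_user"] == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

# Weighted gamma GLM with a log link; exp(coef) is a multiplicative cost ratio.
X = sm.add_constant(df[["chronic_user", "age", "female", "comorbidity"]])
glm = sm.GLM(df["total_cost"], X,
             family=sm.families.Gamma(link=sm.families.links.Log()),
             var_weights=w)
res = glm.fit()
print(np.exp(res.params["chronic_user"]))  # e.g., ~1.4 means ~40% higher expected cost
```
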

Results: Of individuals with one or more opioid prescription, 1.45% were chronic users, 4.81% were concomitant users, and 0.94% were diagnosed as having an opioid use disorder. After adjustment and propensity score weighting, chronic users had statistically significant higher prospective total (40%), medical (3%), and pharmacy (172%) costs. The increases in total, medical, and pharmacy costs associated with concomitant users were 13%, 7%, and 41%, and 28%, 21% and 63% for users with a diagnosed opioid use disorder. Both total and pharmacy costs increased with the length of time characterized as high-risk users, with the increase being statistically significant. Only concomitant users were associated with a higher odds of hospitalization or emergency department use.

Conclusions: Individuals with high-risk prescription opioid use have significantly higher healthcare costs and utilization than their counterparts, especially those with chronic high-dose opioid use.
Source
http://dx.doi.org/10.1186/s12916-018-1058-y
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5954462
May 2018

Factors associated with physicians' prescriptions for rheumatoid arthritis drugs not filled by patients.

Arthritis Res Ther 2018 05 2;20(1):79. Epub 2018 May 2.

Division of Clinical Immunology and Rheumatology, University of Alabama at Birmingham, 510 20th Street South, Birmingham, AL, 35294, USA.

Background: This study estimated the extent and predictors of primary nonadherence (i.e., prescriptions made by physicians but not initiated by patients) to methotrexate and to biologics or tofacitinib in rheumatoid arthritis (RA) patients who were newly prescribed these medications.

Methods: Using administrative claims linked with electronic health records (EHRs) from multiple healthcare provider organizations in the USA, RA patients who received a new prescription for methotrexate or biologics/tofacitinib were identified from EHRs. Claims data were used to ascertain filling or administration status. A logistic regression model for predicting primary nonadherence was developed and tested in training and test samples. Predictors were selected based on clinical judgment and LASSO logistic regression.
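
A compact sketch of LASSO-based predictor selection and test-set AUC evaluation for a binary nonadherence outcome is shown below; the predictors, penalty strength, and data are synthetic and not the study's variable set or exact selection procedure.

```python
# L1-penalized logistic regression: select predictors, then evaluate AUC on held-out data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n, p = 4000, 60
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:8] = rng.normal(0.8, 0.2, 8)   # only a few informative predictors
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta - 0.4))))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=4)
scaler = StandardScaler().fit(X_tr)

lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso_logit.fit(scaler.transform(X_tr), y_tr)

selected = np.flatnonzero(lasso_logit.coef_[0])
auc = roc_auc_score(y_te, lasso_logit.predict_proba(scaler.transform(X_te))[:, 1])
print(f"{selected.size} predictors retained; test AUC = {auc:.2f}")
```
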

Results: A total of 36.8% of patients newly prescribed methotrexate failed to initiate methotrexate within 2 months; 40.6% of patients newly prescribed biologics/tofacitinib failed to initiate within 3 months. Factors associated with methotrexate primary nonadherence included age, race, region, body mass index, count of active drug ingredients, and certain previously diagnosed and treated conditions at baseline. Factors associated with biologics/tofacitinib primary nonadherence included age, insurance, and certain previously treated conditions at baseline. The area under the receiver operating characteristic curve of the logistic regression model estimated in the training sample and applied to the independent test sample was 0.86 and 0.78 for predicting primary nonadherence to methotrexate and to biologics/tofacitinib, respectively.

Conclusions: This study confirmed that failure to initiate new prescriptions for methotrexate and biologics/tofacitinib was common in RA patients. It is feasible to predict patients at high risk of primary nonadherence to methotrexate and to biologics/tofacitinib and to target such patients for early interventions to promote adherence.
Source
http://dx.doi.org/10.1186/s13075-018-1580-5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5932861
May 2018

Methods for Evaluating Natural Experiments in Obesity: A Systematic Review.

Ann Intern Med 2018 06 1;168(11):791-800. Epub 2018 May 1.

Johns Hopkins University School of Medicine and Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland (W.L.B., R.F.W., A.Z., E.T., E.A.K., H.K., E.A.S., O.S., E.B.B., L.J.C.).

Background: Given the obesity pandemic, rigorous methodological approaches, including natural experiments, are needed.

Purpose: To identify studies that report effects of programs, policies, or built environment changes on obesity prevention and control and to describe their methods.

Data Sources: PubMed, CINAHL, PsycINFO, and EconLit (January 2000 to August 2017).

Study Selection: Natural experiments and experimental studies evaluating a program, policy, or built environment change in U.S. or non-U.S. populations by using measures of obesity or obesity-related health behaviors.

Data Extraction: 2 reviewers serially extracted data on study design, population characteristics, data sources and linkages, measures, and analytic methods and independently evaluated risk of bias.

Data Synthesis: 294 studies (188 U.S., 106 non-U.S.) were identified, including 156 natural experiments (53%), 118 experimental studies (40%), and 20 (7%) with unclear study design. Studies used 106 (71 U.S., 35 non-U.S.) data systems; 37% of the U.S. data systems were linked to another data source. For outcomes, 112 studies reported childhood weight and 32 adult weight; 152 had physical activity and 148 had dietary measures. For analysis, natural experiments most commonly used cross-sectional comparisons of exposed and unexposed groups (n = 55 [35%]). Most natural experiments had a high risk of bias, and 63% had weak handling of withdrawals and dropouts.

Limitation: Outcomes restricted to obesity measures and health behaviors; inconsistent or unclear descriptions of natural experiment designs; and imperfect methods for assessing risk of bias in natural experiments.

Conclusion: Many methodologically diverse natural experiments and experimental studies were identified that reported effects of U.S. and non-U.S. programs, policies, or built environment changes on obesity prevention and control. The findings reinforce the need for methodological and analytic advances that would strengthen evaluations of obesity prevention and control initiatives.

Primary Funding Source: National Institutes of Health, Office of Disease Prevention, and Agency for Healthcare Research and Quality. (PROSPERO: CRD42017055750).
Source
http://dx.doi.org/10.7326/M18-0309
June 2018

Defining and Assessing Geriatric Risk Factors and Associated Health Care Utilization Among Older Adults Using Claims and Electronic Health Records.

Med Care 2018 03;56(3):233-239

Department of Health Policy and Management, Center for Population Health IT, Johns Hopkins Bloomberg School of Public Health.

Background: Using electronic health records (EHRs), in addition to claims, to systematically identify patients with factors associated with adverse outcomes (geriatric risk) among older adults can prove beneficial for population health management and clinical service delivery.

Objective: To define and compare geriatric risk factors derivable from claims, structured EHRs, and unstructured EHRs, and estimate the relationship between geriatric risk factors and health care utilization.

Research Design: We performed a retrospective cohort study of patients enrolled in a Medicare Advantage plan from 2011 to 2013 using both administrative claims and EHRs. We defined 10 individual geriatric risk factors and a summary geriatric risk index based on diagnosed conditions and pattern matching techniques applied to EHR free text. The prevalence of geriatric risk factors was estimated using claims, structured EHRs, and structured and unstructured EHRs combined. The association of geriatric risk index with any occurrence of hospitalizations, emergency department visits, and nursing home visits were estimated using logistic regression adjusted for demographic and comorbidity covariates.
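
As a toy version of the pattern-matching step, the sketch below scans free-text notes for phrases suggestive of geriatric risk factors and counts distinct factors per patient as a crude risk index. The phrase lists and notes are illustrative stand-ins, not the study's validated definitions.

```python
# Toy pattern matching over EHR free text to flag geriatric risk factors per patient.
import re
from collections import defaultdict

PATTERNS = {
    "falls": re.compile(r"\b(fall|falls|fell|falling)\b", re.I),
    "weight_loss": re.compile(r"\b(weight loss|losing weight|lost \d+ (lb|lbs|pounds))\b", re.I),
    "social_support": re.compile(r"\b(lives alone|no caregiver|lack of social support)\b", re.I),
    "walking_difficulty": re.compile(r"\b(difficulty walking|unsteady gait|uses a walker)\b", re.I),
}

notes = [
    ("pt001", "Patient fell twice last month. Lives alone, daughter visits weekly."),
    ("pt002", "Reports unsteady gait and 10 lbs weight loss since last visit."),
]

factors_by_patient = defaultdict(set)
for patient_id, text in notes:
    for factor, pattern in PATTERNS.items():
        if pattern.search(text):
            factors_by_patient[patient_id].add(factor)

# Summary geriatric risk index: count of distinct factors documented per patient.
for patient_id, factors in factors_by_patient.items():
    print(patient_id, len(factors), sorted(factors))
```
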

Results: The prevalence of geriatric risk factors increased after adding unstructured EHR data to structured EHRs, compared with those derived from structured EHRs alone and claims alone. On the basis of claims, structured EHRs, and structured and unstructured EHRs combined, 12.9%, 15.0%, and 24.6% of the patients had 1 geriatric risk factor, respectively; 3.9%, 4.2%, and 15.8% had ≥2 geriatric risk factors, respectively. Statistically significant association between geriatric risk index and health care utilization was found independent of demographic and comorbidity covariates. For example, based on claims, estimated odds ratios for having 1 and ≥2 geriatric risk factors in year 1 were 1.49 (P<0.001) and 2.62 (P<0.001) in predicting any occurrence of hospitalizations in year 1, and 1.32 (P<0.001) and 1.34 (P=0.003) in predicting any occurrence of hospitalizations in year 2.

Conclusions: The results demonstrate the feasibility and potential of using EHRs and claims for collecting new types of geriatric risk information that could augment the more commonly collected disease information to identify and move upstream the management of high-risk cases among older patients.
Source
http://dx.doi.org/10.1097/MLR.0000000000000865
March 2018

A State-wide Health IT Infrastructure for Population Health: Building a Community-wide Electronic Platform for Maryland's All-Payer Global Budget.

Online J Public Health Inform 2017 31;9(3):e195. Epub 2017 Dec 31.

Center for Population Health IT, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD.

Maryland Department of Health (MDH) has been preparing for alignment of its population health initiatives with Maryland's unique All-Payer hospital global budget program. In order to operationalize population health initiatives, it is required to identify a starter set of measures addressing community level health interventions and to collect interoperable data for those measures. The broad adoption of electronic health records (EHRs) with ongoing data collection on almost all patients in the state, combined with hospital participation in health information exchange (HIE) initiatives, provides an unprecedented opportunity for near real-time assessment of the health of the communities. MDH's EHR-based monitoring complements, and perhaps replaces, ad-hoc assessments based on limited surveys, billing, and other administrative data. This article explores the potential expansion of health IT capacity as a method to improve population health across Maryland. First, we propose a progression plan for four selected community-wide population health measures: body mass index, blood pressure, smoking status, and falls-related injuries. We then present an assessment of the current and near real-time availability of digital data in Maryland including the geographic granularity on which each measure can be assessed statewide. Finally, we provide general recommendations to improve interoperable data collection for selected measures over time via the Maryland HIE. This paper is intended to serve as a high level guiding framework for communities across the US that are undergoing healthcare transformation toward integrated models of care using universal interoperable EHRs.
Source
http://dx.doi.org/10.5210/ojphi.v9i3.8129
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5790428
December 2017

Helping Older Adults Improve Their Medication Experience (HOME) by Addressing Medication Regimen Complexity in Home Healthcare.

Home Healthc Now 2018 Jan/Feb;36(1):10-19

Orla C. Sheehan, MD, PhD, is an Assistant Professor, Division of Geriatric Medicine and Gerontology, Center on Aging and Health, Johns Hopkins University, Baltimore, Maryland. Hadi Kharrazi, MHI, MD, PhD, is an Assistant Professor, Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland. Kimberly J. Carl, RN, is Director, Home Health Services, Johns Hopkins Home Care Group, Baltimore, Maryland. Bruce Leff, MD, is a Professor, Division of Geriatric Medicine and Gerontology, Center for Transformative Geriatric Research, Johns Hopkins University, Baltimore, Maryland. Jennifer L. Wolff, PhD, is a Professor, Health Policy and Management, School of Public Health, Johns Hopkins University, Baltimore, Maryland. David L. Roth, PhD, is a Professor, Center on Aging and Health, Division of Geriatrics and Gerontology, Johns Hopkins University, Baltimore, Maryland. Jennifer Gabbard, MD, is an Assistant Professor, Wake Forest University, Section of Gerontology and Geriatrics, Winston-Salem, North Carolina. Cynthia M. Boyd, MD, MPH, is a Professor, Division of Geriatric Medicine and Gerontology, Center for Transformative Geriatric Research, Johns Hopkins University, Baltimore, Maryland.

In skilled home healthcare (SHHC), communication between nurses and physicians is often inadequate for medication reconciliation and needed changes to the medication regimens are rarely made. Fragmentation of electronic health record (EHR) systems, transitions of care, lack of physician-nurse in-person contact, and poor understanding of medications by patients and their families put patients at risk for serious adverse outcomes. The aim of this study was to develop and test the HOME tool, an informatics tool to improve communication about medication regimens, share the insights of home care nurses with physicians, and highlight to physicians and nurses the complexity of medication schedules. We used human computer interaction design and evaluation principles, automated extraction from standardized forms, and modification of existing EHR fields to highlight key medication-related insights that had arisen during the SHHC visit. Separate versions of the tool were developed for physicians/nurses and patients/caregivers. A pilot of the tool was conducted using 20 SHHC encounters. Home care nurses and physicians found the tool useful for communication. Home care nurses were able to implement the HOME tool into their clinical workflow and reported improved communication with physicians about medications. This simple and largely automated tool improves understanding and communication around medications in SHHC.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1097/NHH.0000000000000632DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5761673PMC
August 2018

A State-of-the-Art Systematic Content Analysis of Games for Health.

Games Health J 2018 Feb 2;7(1):1-15. Epub 2018 Jan 2.

Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland.

As the field of games for health continues to gain momentum, it is crucial to document the field's scale of growth, identify design patterns, and address potential design issues for future health game development. Few studies have explored the attributes and usability features of games for health as a whole over time. We offer the first comprehensive systematic content analysis of digital games for health, examining 1743 health games released between 1983 and 2016 in 23 countries, extracted from nine international English-language health game databases and directories. The majority of these games were developed in the United States (67.18%) and France (18.59%). The most popular platforms were web browsers (72.38%) and Windows (14.41%). Approximately four out of five games (79.12%) were available at no cost. We coded 1553 accessible games for in-depth analysis and further assessed 1303 for usability. The most common health topics were cognitive training (37.41%), indirect health education (13.33%), and medical care provision (9.98%). Most games (75.66%) could be completed within 60 minutes. The main usability problems identified included a lack of customization, nonskippable content, and a lack of feedback and instruction for players. While most usability problems have lessened as software and hardware technology improved, players' ability to skip nonplayable content has become slightly more restricted over time. Comparison with game efficacy publications suggests that a further understanding of the scope of games for health is needed at a global level.
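As a rough illustration of how a coded game catalogue of this kind can be summarized into the shares reported above, here is a short pandas sketch on a tiny synthetic table; the column names (platform, free_to_play, health_topic) are hypothetical stand-ins for the study's actual coding scheme.

```python
import pandas as pd

# Tiny synthetic stand-in for the coded game catalogue; column names are hypothetical.
games = pd.DataFrame({
    "title": ["Game A", "Game B", "Game C", "Game D", "Game E"],
    "platform": ["web browser", "Windows", "web browser", "web browser", "iOS"],
    "free_to_play": [True, True, False, True, True],
    "health_topic": ["cognitive training", "indirect health education",
                     "cognitive training", "medical care provision", "cognitive training"],
})

# Percentage shares by platform and health topic, and the share available at no cost,
# mirroring the kind of descriptive statistics reported in the abstract.
print(games["platform"].value_counts(normalize=True).mul(100).round(2))
print(games["health_topic"].value_counts(normalize=True).mul(100).round(2))
print("free to play: {:.1f}%".format(100 * games["free_to_play"].mean()))
```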
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1089/g4h.2017.0095DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5797326PMC
February 2018

A Practical Comparison Between the Predictive Power of Population-based Risk Stratification Models Using Data From Electronic Health Records Versus Administrative Claims: Setting a Baseline for Future EHR-derived Risk Stratification Models.

Med Care 2018 02;56(2):202-203

Department of Health Policy and Management, Center for Population Health Information Technology, Johns Hopkins School of Public Health, Baltimore, MD.

View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1097/MLR.0000000000000849DOI Listing
February 2018

Comparing clinician descriptions of frailty and geriatric syndromes using electronic health records: a retrospective cohort study.

BMC Geriatr 2017 10 25;17(1):248. Epub 2017 Oct 25.

Center for Population Health IT, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA.

Background: Geriatric syndromes, including frailty, are common in older adults and associated with adverse outcomes. We compared patients described in clinical notes as "frail" to other older adults with respect to geriatric syndrome burden and healthcare utilization.

Methods: We conducted a retrospective cohort study on 18,341 Medicare Advantage enrollees aged 65+ (members of a large nonprofit medical group in Massachusetts), analyzing up to three years of administrative claims and structured and unstructured electronic health record (EHR) data. We determined the presence of ten geriatric syndromes (falls, malnutrition, dementia, severe urinary control issues, absence of fecal control, visual impairment, walking difficulty, pressure ulcers, lack of social support, and weight loss) from claims and EHR data, and the presence of frailty descriptions in clinical notes with a pattern-matching natural language processing (NLP) algorithm.

Results: Of the 18,341 patients, 2202 (12%) were described as "frail" in clinical notes. "Frail" patients were older (mean age 82.3 ± 6.8 vs 75.9 ± 5.9 years, p < .001) and had higher rates of healthcare utilization, including inpatient hospitalizations and emergency department visits, than the rest of the population (p < .001). "Frail" patients had, on average, 4.85 ± 1.72 of the ten geriatric syndromes studied, whereas non-frail patients had 2.35 ± 1.71 (p = .013). Falls, walking difficulty, malnutrition, weight loss, lack of social support, and dementia were the syndromes most highly correlated with frailty descriptions. The most common geriatric syndrome pattern among "frail" patients was the combination of walking difficulty, lack of social support, falls, and weight loss.

Conclusions: Patients identified as "frail" by providers in clinical notes have higher rates of healthcare utilization and more geriatric syndromes than other patients. Certain geriatric syndromes were more highly correlated with descriptions of frailty than others.
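The frailty labels above come from a pattern-matching NLP step over free-text notes. The study's actual lexicon and rules are not given in the abstract, so the following is only a minimal Python sketch under assumed, hypothetical patterns: a tiny regex term list plus a crude negation guard.

```python
import re

# Hypothetical frailty lexicon; the study's actual patterns are not published in the abstract.
FRAILTY_PATTERNS = [
    r"\bfrail(?:ty)?\b",
    r"\bdeconditioned\b",
    r"\bfailure to thrive\b",
]

# Very rough negation guard: skip mentions closely preceded by "no", "not", or "denies".
NEGATION = re.compile(r"\b(?:no|not|denies)\s+(?:\w+\s+){0,2}$", re.IGNORECASE)

def note_describes_frailty(note_text: str) -> bool:
    """Return True if the note contains a non-negated frailty mention."""
    for pattern in FRAILTY_PATTERNS:
        for match in re.finditer(pattern, note_text, flags=re.IGNORECASE):
            preceding = note_text[:match.start()]
            if not NEGATION.search(preceding):
                return True
    return False

if __name__ == "__main__":
    notes = [
        "82 yo female, frail appearing, lives alone with poor social support.",
        "Patient is not frail; ambulates independently without assistive device.",
    ]
    for n in notes:
        print(note_describes_frailty(n), "-", n)
```

A production pipeline would need a much richer term list, better negation and context handling, and validation against chart review, but the basic pattern-matching idea is the same.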
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1186/s12877-017-0645-7DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5657074PMC
October 2017

Electronic medical record reminders and smoking cessation activities in primary care.

Addict Behav 2018 Feb 16;77:203-209. Epub 2017 Oct 16.

Department of Family Medicine, College of Medicine, The Ohio State University, 2231 North High Street, 265 Northwood and High Building, Columbus, OH 43201, United States.

Purpose: The purpose of this paper is to assess electronic medical record (EMR) automatic reminder use in relation to smoking cessation activities among primary-care providers.

Background: Primary-care physicians are on the front line of efforts to promote smoking cessation. Moreover, doctors' prescribing privileges give them additional tools to help patients successfully quit smoking. New EMR functions can provide automated reminders for physicians to counsel smokers and to prescribe medications that support quit attempts.

Sample And Methods: Logit regression was used to analyze the 2012 National Ambulatory Medical Care Survey (NAMCS). Variables related to the EMR's clinical reminder capability, the patient's smoking status, the provision of cessation counseling, and the prescribing of drugs that support quitting were analyzed.

Results: Smoking status was recorded in 77.7% of primary care visit documents. Smoking cessation counseling was ordered or provided in 16.4% of visits to physicians' offices that routinely used electronic reminders, compared with 9.1% in offices lacking the functionality. Smoking cessation medication was ordered or prescribed for 3.7% of current smokers when reminders were routinely used versus 2.1% when no reminder was used. All of these differences were statistically significant.

Conclusions: The presence of an EMR equipped with automated clinical reminders is a valuable resource in efforts to promote smoking cessation. Insurers, regulators, and organizations promulgating clinical guidelines should include the use of EMR technology as part of their programs.
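As a hedged illustration of the kind of logit model described in the Methods, the sketch below fits an unweighted logistic regression with statsmodels on a synthetic visit-level table whose rates loosely echo the figures above; the real NAMCS analysis would need the survey's design weights and full covariate set, which this toy example omits.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic visit-level records for current smokers, standing in for NAMCS data (illustration only).
rng = np.random.default_rng(0)
n = 5000
visits = pd.DataFrame({
    "routine_reminders": rng.integers(0, 2, n),   # 1 = office uses EMR clinical reminders routinely
    "patient_age": rng.integers(18, 85, n),
})

# Counseling is ordered/provided more often where reminders are routine
# (probabilities loosely echo the 16.4% vs 9.1% reported above).
p = np.where(visits["routine_reminders"] == 1, 0.164, 0.091)
visits["counseling_ordered"] = rng.binomial(1, p)

X = sm.add_constant(visits[["routine_reminders", "patient_age"]])
fit = sm.Logit(visits["counseling_ordered"], X).fit(disp=False)
print(fit.summary())
print("Odds ratio for routine reminders:", np.exp(fit.params["routine_reminders"]))
```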
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.addbeh.2017.10.009DOI Listing
February 2018

Evaluating the Impact of Prescription Fill Rates on Risk Stratification Model Performance.

Med Care 2017 12;55(12):1052-1060

*Department of Epidemiology, Johns Hopkins School of Public Health, Center for Drug Safety and Effectiveness; †Department of Health Policy and Management, Johns Hopkins School of Public Health, Center for Population Health Information Technology; ‡Johns Hopkins Hospital, Center for Medication Quality and Outcomes at Johns Hopkins Hospital; §Department of Epidemiology, Johns Hopkins School of Public Health, Center for Drug Safety and Effectiveness; ∥Department of Pharmacy, Johns Hopkins Bayview Medical Center, Baltimore, MD.

Background: Risk adjustment models are traditionally derived from administrative claims. Prescription fill rates, extracted by comparing electronic health record prescriptions with pharmacy claims fills, represent a novel measure of medication adherence and may improve the performance of risk adjustment models.

Objective: We evaluated the impact of prescription fill rates on claims-based risk adjustment models in predicting both concurrent and prospective costs and utilization.

Methods: We conducted a retrospective cohort study of 43,097 primary care patients from the HealthPartners network between 2011 and 2012. Diagnosis and/or pharmacy claims from 2011 were used to build 3 base models using the Johns Hopkins ACG system, in addition to demographics. Model performance was compared before and after adding 3 types of prescription fill rates: primary 0-7 day, primary 0-30 day, and overall. Overall fill rates used all ordered prescriptions from the electronic health record, while primary fill rates excluded refill orders.

Results: The overall, primary 0-7 day, and primary 0-30 day fill rates were 72.30%, 59.82%, and 67.33%, respectively. Fill rates were similar between sexes but varied across medication classes, with the youngest age group having the highest rates. Adding fill rates modestly improved the performance of all models in explaining medical costs (improving concurrent R² by 1.15% to 2.07%), followed by total costs (0.58% to 1.43%) and pharmacy costs (0.07% to 0.65%). The improvement was greater for concurrent costs than for prospective costs. Base models without diagnosis information showed the greatest improvement from adding prescription fill rates.

Conclusions: Prescription fill rates can modestly enhance claims-based risk prediction models; however, population-level improvements in predicting utilization are limited.
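The Johns Hopkins ACG system used for the base models is proprietary and cannot be reproduced here, so the sketch below only illustrates the evaluation pattern under stated assumptions: fit a cost model with and without a patient-level fill rate on synthetic data and compare the concurrent R², analogous to the improvements quantified above. All variable names and the cost-generating process are made up.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Synthetic patient-level data; real models would use ACG risk markers built from claims.
rng = np.random.default_rng(42)
n = 10_000
patients = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "n_chronic_conditions": rng.poisson(2, n),
})

# Fill rate = filled prescriptions / ordered prescriptions, per patient,
# as if matched between EHR orders and pharmacy claims fills.
ordered = rng.integers(1, 20, n)
filled = rng.binomial(ordered, 0.72)          # ~72% overall fill rate, as reported
patients["fill_rate"] = filled / ordered

# Simulated annual cost loosely depends on age, morbidity, and adherence.
patients["cost"] = (
    200 * patients["n_chronic_conditions"]
    + 10 * patients["age"]
    - 500 * patients["fill_rate"]
    + rng.normal(0, 1000, n)
)

base_features = ["age", "n_chronic_conditions"]
for label, features in [("base", base_features),
                        ("base + fill rate", base_features + ["fill_rate"])]:
    model = LinearRegression().fit(patients[features], patients["cost"])
    r2 = r2_score(patients["cost"], model.predict(patients[features]))
    print(f"{label}: concurrent R^2 = {r2:.4f}")
```

The difference between the two printed R² values plays the same role as the percentage-point improvements reported in the Results, although the magnitudes here are entirely an artifact of the simulated data.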
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1097/MLR.0000000000000825DOI Listing
December 2017