Publications by authors named "Clyde B Schechter"

126 Publications

The development and validation of a resource consumption score of an emergency department consultation.

PLoS One 2021 Feb 19;16(2):e0247244. Epub 2021 Feb 19.

Department of Emergency Medicine, Inselspital, University Hospital, University of Bern, Bern, Switzerland.

Background: Emergency Department (ED) visits and health care costs are increasing globally, but little is known about the factors that drive ED resource consumption. This study aims to analyse and predict total ED resource consumption from patient and consultation characteristics, in order to enable performance analysis and the evaluation of quality improvements.

Methods: Characteristics of ED visits of a large Swiss university hospital were summarized according to acute patient condition factors (e.g. chief complaint, resuscitation bay use, vital parameter deviations), chronic patient conditions (e.g. age, comorbidities, drug intake), and contextual factors (e.g. night-time admission). Univariable and multivariable linear regression analyses were conducted with the total ED resource consumption as the dependent variable.

Results: In total, 164,729 visits were included in the analysis. Physician resources accounted for the largest proportion (54.8%), followed by radiology (19.2%) and laboratory work-up (16.2%). In the multivariable final model, chief complaint had the greatest impact on total ED resource consumption, followed by resuscitation bay use and admission by ambulance. The impact of age group was small. The final multivariable model was validated (R² of 0.54) and a scoring system was derived from the predictors.

Conclusions: More than half of the variation in total ED resource consumption can be predicted by our suggested model in the internal validation, but further studies are needed for external validation. The score developed can be used to calculate benchmarks of an ED and provides leaders in emergency care with a tool that allows them to evaluate resource decisions and to estimate effects of organizational changes.
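
The abstract describes fitting a multivariable linear regression of total ED resource consumption on patient and consultation characteristics and then turning the model into an additive score. The sketch below shows that general workflow only; the file, column names, and point scale are hypothetical, not the study's actual variables or model.

```python
# Sketch: regress total resource consumption on consultation characteristics,
# then convert coefficients into an integer scoring system.
# Hypothetical data file and column names.
import pandas as pd
import statsmodels.formula.api as smf

visits = pd.read_csv("ed_visits.csv")  # one row per ED consultation

model = smf.ols(
    "total_resource_points ~ C(chief_complaint) + resus_bay_use"
    " + ambulance_arrival + C(age_group) + night_admission",
    data=visits,
).fit()
print(model.rsquared)  # the paper reports an R^2 of about 0.54 on internal validation

# Derive a simple additive score: scale coefficients so the smallest
# non-trivial effect is worth one point, then round to integers.
coefs = model.params.drop("Intercept")
unit = coefs.abs()[coefs.abs() > 1e-6].min()
score_points = (coefs / unit).round().astype(int)
print(score_points.sort_values())
```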

Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0247244 (PLOS)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7894944 (PMC)
February 2021

Trade-Offs Between Harms and Benefits of Different Breast Cancer Screening Intervals Among Low-Risk Women.

J Natl Cancer Inst 2021 Jan 30. Epub 2021 Jan 30.

Norris Cotton Cancer Center and the Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Lebanon, NH, USA.

Background: A paucity of research addresses breast cancer screening strategies for women at lower-than-average breast cancer risk. The aim of this study was to examine screening harms and benefits among women aged 50-74 years at lower-than-average breast cancer risk by breast density.

Methods: Three well-established, validated Cancer Intervention and Surveillance Network models were used to estimate the lifetime benefits and harms of different screening scenarios, varying by screening interval (biennial, triennial). Breast cancer deaths averted, life-years and quality-adjusted life-years gained, false-positives, benign biopsies, and overdiagnosis were assessed by relative risk (RR) level (0.6, 0.7, 0.85, 1 [average risk]) and breast density category, for US women born in 1970.

Results: Screening benefits decreased proportionally with decreasing risk and with lower breast density. False-positives, unnecessary biopsies, and the percentage overdiagnosis also varied substantially by breast density category; false-positives and unnecessary biopsies were highest in the heterogeneously dense category. For women with fatty or scattered fibroglandular breast density and a relative risk of no more than 0.85, the additional deaths averted and life-years gained were small with biennial vs triennial screening. For these groups, undergoing 4 additional screens (screening biennially [13 screens] vs triennially [9 screens]) averted no more than 1 additional breast cancer death and gained no more than 16 life-years and no more than 10 quality-adjusted life-years per 1000 women but resulted in up to 232 more false-positives per 1000 women.

Conclusion: Triennial screening from age 50 to 74 years may be a reasonable screening strategy for women with lower-than-average breast cancer risk and fatty or scattered fibroglandular breast density.

Source
http://dx.doi.org/10.1093/jnci/djaa218 (DOI)
January 2021

Individualized quality of life benefit and cost-effectiveness estimates of proton therapy for patients with oropharyngeal cancer.

Radiat Oncol 2021 Jan 21;16(1):19. Epub 2021 Jan 21.

Institute for Onco-Physics, Albert Einstein College of Medicine, Bronx, NY, 10461, USA.

Background: Proton therapy is a promising advancement in radiation oncology especially in terms of reducing normal tissue toxicity, although it is currently expensive and of limited availability. Here we estimated the individual quality of life benefit and cost-effectiveness of proton therapy in patients with oropharyngeal cancer treated with definitive radiation therapy (RT), as a decision-making tool for treatment individualization.

Methods And Materials: Normal tissue complication probability models were used to estimate the risk of dysphagia, esophagitis, hypothyroidism, xerostomia and oral mucositis for 33 patients, comparing delivered photon intensity-modulated RT (IMRT) plans to intensity-modulated proton therapy (IMPT) plans. Quality-adjusted life years (QALYs) lost were calculated for each complication while accounting for patient-specific conditional survival probability and assigning quality-adjustment factors based on complication severity. Cost-effectiveness was modeled based on upfront costs of IMPT and IMRT, and the cost of acute and/or long-term management of treatment complications. Uncertainties in all model parameters and sensitivity analyses were included through Monte Carlo sampling.

Results: The incremental cost-effectiveness ratios (ICERs) showed considerable variability in the cost of QALYs spared between patients, with median $361,405/QALY for all patients, varying from $54,477/QALY to $1,508,845/QALY between individual patients. Proton therapy was more likely to be cost-effective for patients with p16-positive tumors ($234,201/QALY), compared to p16-negative tumors ($516,297/QALY). For patients with p16-positive tumors treated with comprehensive nodal irradiation, proton therapy is estimated to be cost-effective in ≥ 50% of sampled cases for 8/9 patients at $500,000/QALY, compared to 6/24 patients who either have p16-negative tumors or receive unilateral neck irradiation.

Conclusions: Proton therapy cost-effectiveness varies greatly among oropharyngeal cancer patients, and highlights the importance of individualized decision-making. Although the upfront cost, societal willingness to pay and healthcare administration can vary greatly among different countries, identifying patients for whom proton therapy will have the greatest benefit can optimize resource allocation and inform prospective clinical trial design.
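
The per-patient incremental cost-effectiveness ratio is the incremental cost divided by the QALYs spared, and the abstract notes that parameter uncertainty was propagated by Monte Carlo sampling. Below is a toy sketch of that calculation; every distribution and number in it is an illustrative assumption, not a parameter from the study.

```python
# Toy Monte Carlo ICER: ICER = (cost_IMPT - cost_IMRT) / QALYs gained with IMPT.
# All inputs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

upfront_impt = rng.normal(60_000, 5_000, n)      # assumed upfront proton cost ($)
upfront_imrt = rng.normal(30_000, 3_000, n)      # assumed upfront photon cost ($)
toxicity_cost_saved = rng.gamma(2.0, 2_500, n)   # assumed complication costs avoided ($)
qaly_gained = rng.normal(0.10, 0.04, n)          # assumed QALYs spared by lower toxicity

delta_cost = (upfront_impt - upfront_imrt) - toxicity_cost_saved
icer = delta_cost / qaly_gained

print(f"median ICER: ${np.median(icer):,.0f}/QALY")
wtp = 500_000  # willingness-to-pay threshold used for comparison in the abstract
print("P(cost-effective):", np.mean(delta_cost <= wtp * qaly_gained))
```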

Source
http://dx.doi.org/10.1186/s13014-021-01745-1 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7819210 (PMC)
January 2021

Acculturation, coping, and PTSD in Hispanic 9/11 rescue and recovery workers.

Psychol Trauma 2021 Jan;13(1):84-93

Department of Psychiatry, Icahn School of Medicine at Mount Sinai.

Research examining the responders of the World Trade Center terrorist attacks of 9/11 has found that Hispanic responders are at greater risk for posttraumatic stress disorder (PTSD) than non-Hispanic White responders. However, no studies have examined how acculturation may influence the relationship between coping and PTSD in Hispanic 9/11 responders. This novel study is the first to examine differences in coping and PTSD among Hispanic responders by level of acculturation. The sample is composed of 845 Hispanic 9/11 responders who were seen at the World Trade Center Health Program and participated in a web-based survey. Using logistic and multiple linear regression, we examined how acculturation is related to their coping strategies and risk for PTSD. We also tested for interaction to examine whether level of acculturation moderated the relationship between coping and PTSD symptom severity. Key findings revealed that higher acculturation is associated with the use of substances, venting, and humor to cope, while lower acculturation is associated with the use of active coping and self-distraction in this sample. We also found that less acculturated responders were more likely to experience more severe PTSD. Lastly, our findings revealed that Hispanics who are more acculturated and used substances to cope had more severe PTSD than less acculturated responders. These findings highlight the need to consider the role of acculturation in Hispanic responders' coping and PTSD. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Source
http://dx.doi.org/10.1037/tra0000624 (DOI)
January 2021

Mental health stigma and barriers to care in World Trade Center responders: Results from a large, population-based health monitoring cohort.

Am J Ind Med 2021 03 25;64(3):208-216. Epub 2020 Nov 25.

Department of Psychiatry, Yale School of Medicine, New Haven, Connecticut, USA.

Background: Nearly 20 years after the terrorist attacks of September 11, 2001, multiple studies have documented the adverse mental consequences among World Trade Center (WTC) rescue, recovery, and clean-up workers. However, scarce research has examined mental health stigma and barriers to care in WTC-exposed individuals, and no known study has examined whether rates of endorsement may differ between police and "nontraditional" responders, the latter comprising a heterogeneous group of workers and volunteers.

Objective: To identify the prevalence and correlates of mental health stigma and barriers to care in WTC responders.

Methods: Mental health stigma and barriers to care and their correlates were examined in 6,777 police and 6,272 nontraditional WTC responders.

Results: Nontraditional responders endorsed more stigma or barriers to care concerns than police responders. Within a subsample who screened positive for a psychiatric disorder, police were more likely than nontraditional responders to endorse "concerns that negative job consequences might result" (17.9% vs. 9.1%), while nontraditional responders were more likely to endorse "I don't know where to go to find counseling services" (18.4% vs. 6.6%). Within this subsample, mental health service need and more severe WTC-related posttraumatic stress disorder symptoms were associated with increased likelihood of endorsing stigma or barriers; pre-9/11 psychiatric history and non-Hispanic Black race/ethnicity were associated with lower likelihood of endorsing stigma or barriers.

Conclusions: Results of this study underscore the burden of mental health stigma and barriers to care in WTC responders, and highlight the need for targeted interventions to address these concerns and promote mental healthcare utilization in this population.

Source
http://dx.doi.org/10.1002/ajim.23204 (DOI)
March 2021

Forced expiratory time: a composite of airway narrowing and airway closure.

J Appl Physiol (1985) 2021 Jan 22;130(1):80-86. Epub 2020 Oct 22.

Department of Medicine, Larner College of Medicine, University of Vermont, Burlington, Vermont.

Forced expiratory time (FET) is a spirometrically derived variable thought to reflect lung function, but its physiological basis remains poorly understood. We developed a mathematical theory of FET assuming a linear forced expiratory flow-volume profile that terminates when expiratory flow falls below a defined detection threshold. FET is predicted to correlate negatively with both FEV1 and FVC if variations in the rate of lung emptying (relative to normal) among individuals in a population exceed variations in the amount of lung emptying. We retrospectively determined FET pre- and postmethacholine challenge in 1,241 patients (818 had normal lung function, 137 were obstructed, and 229 were restricted) and examined its relationships to spirometric and demographic variables in both hyperresponsive and normoresponsive individuals. Mean FET was 9.6 ± 2.2 s in the normal group, 12.3 ± 3.0 s in those with obstruction, and 8.8 ± 1.9 s in those with restriction. FET was inversely related to FEV1/FVC in all groups, negatively related to FEV1 in the obstructed patients, and positively related to FVC in both the normal and restricted patients. There was no relationship with methacholine responsiveness. Overall, our theory of the relationship between FET and the spirometric indices is supported by these findings and potentially explains how FET is affected by sex, age, smoking status, and possibly body mass index.

Forced expiratory time has long been felt to reflect important physiological information about lung function, but exactly how has never been clear. Here, we use a model analysis to assess the contributions of airway narrowing versus airway closure to FET in a population of individuals and find support for the theory that FET correlates positively with FEV1 if the amounts of lung emptying over a forced expiration vary from predicted values more than the rates of lung emptying do, whereas the correlation is negative in the opposite case.
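
To make the "rate versus amount of lung emptying" argument concrete, here is a toy calculation, not the authors' model: if flow falls linearly with expired volume and the manoeuvre ends when flow drops below a detection threshold, FET has a closed form that contains both a volume (amount) term and a flow (rate) term.

```python
# Toy model: expiratory flow declines linearly with expired volume, and the
# manoeuvre "ends" when flow falls below a detection threshold.  Then
#   FET = (V_T / Vdot0) * ln(Vdot0 / Vdot_thresh),
# so FET lengthens with the amount of emptying (V_T) and shortens as the
# relative rate of emptying (Vdot0 / V_T) rises.  Numbers are illustrative.
import math

def fet(v_t_litres, vdot0_lps, vdot_thresh_lps=0.025):
    """Forced expiratory time for a linear flow-volume profile."""
    return (v_t_litres / vdot0_lps) * math.log(vdot0_lps / vdot_thresh_lps)

print(round(fet(5.0, 6.0), 1))  # normal-like: high flow relative to volume
print(round(fet(5.0, 3.0), 1))  # slower emptying (obstruction-like): FET lengthens
print(round(fet(3.0, 4.0), 1))  # smaller volume (restriction-like): FET shortens
```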

Source
http://dx.doi.org/10.1152/japplphysiol.00556.2020 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7944929 (PMC)
January 2021

Design and methods of NYC care calls: An effectiveness trial of telephone-delivered type 2 diabetes self-management support.

Contemp Clin Trials 2020 Nov 3;98:106166. Epub 2020 Oct 3.

New York City Department of Health and Mental Hygiene, Long Island City, NY, USA.

Although problems with type 2 diabetes (T2D) self-management and treatment adherence often co-occur with emotional distress, few translatable intervention approaches are available that can target these related problems in primary care practice settings. The New York City (NYC) Care Calls study is a randomized controlled trial that tests the effectiveness of structured support for diabetes self-management and distress management, delivered via telephone by health educators, in improving glycemic control, self-management and emotional well-being among predominantly ethnic minority and socioeconomically disadvantaged adults with suboptimally controlled T2D. English- and Spanish-speaking adults treated for T2D in NYC primary care practices were recruited based on having an A1C ≥ 7.5% despite being prescribed medications for diabetes. Participants (N = 812) were randomly assigned to a telephonic intervention condition with a stepped protocol of 6-12 phone calls over 1 year, delivered by a health educator, or to a comparison condition of enhanced usual care. The primary outcome is change in A1C over one year, measured at baseline and again approximately 6- and 12-months later. Secondary outcomes measured on the same schedule include blood pressure, patient-reported emotional distress, treatment adherence and self-management behaviors. A comprehensive effectiveness evaluation is guided by the RE-AIM framework (Reach, Effectiveness, Adoption, Implementation, Maintenance) to gather data that can inform dissemination and implementation of the intervention, if successful. This paper describes the study rationale, trial design, and methodology.

Source
http://dx.doi.org/10.1016/j.cct.2020.106166 (DOI)
November 2020

Distress and Type 2 Diabetes Self-Care: Putting the Pieces Together.

Ann Behav Med 2020 Sep 11. Epub 2020 Sep 11.

Ferkauf Graduate School of Psychology, Yeshiva University, Bronx, NY, USA.

Background: Conflicting research emphasizes depression, diabetes distress, or well-being in relation to diabetes self-care and risk for poor health outcomes.

Purpose: The purpose of this study was to test whether a latent variable for general psychological distress derived from shared variance of depression symptoms, diabetes distress, and well-being predicts a latent variable of diabetes self-care and to examine evidence for unique effects once shared effects are adjusted for.

Methods: Adults with suboptimally controlled diabetes were recruited from the South Bronx, NY, for a telephonic diabetes self-management support trial. Baseline diabetes self-care, medication adherence, depression symptoms, diabetes distress, and well-being were measured by validated self-report. Structural equation modeling specified a latent variable for general psychological distress derived from shared variance of depression symptoms, diabetes distress, and well-being. Diabetes self-care was a latent variable indicated by diet, glucose self-monitoring, and medication adherence.

Results: Participants (N = 627, 65% female) were predominantly ethnic minority (70% Hispanic; 45% Black) and 77% reported household income <$20K/year. Mean (standard deviation) age = 56 (12) years; A1c = 9.1% (1.9%); body mass index = 32 (8) kg/m2. The latent variable for psychological distress was a robust predictor of poorer diabetes self-care (coefficient = -0.59 [confidence interval = -0.71, -0.46], p < .001) with good model fit. Unique paths from depression symptoms, diabetes distress, and well-being (all ps > .99) to self-care were not observed.

Conclusions: In this population of disadvantaged adults with suboptimally controlled diabetes, general psychological distress was strongly associated with poorer diabetes self-care and fully accounted for the effects of depression, diabetes distress, and positive well-being. This suggests that general distress may underlie previously reported associations between these constructs and diabetes self-care.
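
As a rough illustration of the analysis described above, a latent "distress" factor loading on the three distress indicators can be specified to predict a latent self-care factor in lavaan-style syntax, for example with the semopy package. The variable names below are hypothetical, and the snippet is a sketch of the modeling idea, not the study's fitted model.

```python
# Sketch of the structural equation model described in the abstract:
# a general-distress latent variable (depression, diabetes distress,
# reversed well-being) predicting a latent self-care variable
# (diet, glucose self-monitoring, medication adherence).
# Hypothetical column names; requires the semopy package.
import pandas as pd
from semopy import Model

data = pd.read_csv("baseline_survey.csv")

desc = """
distress  =~ depression + diabetes_distress + wellbeing_reversed
self_care =~ diet + glucose_monitoring + med_adherence
self_care ~ distress
"""

model = Model(desc)
model.fit(data)
print(model.inspect())  # loadings and the distress -> self_care path
```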

Source
http://dx.doi.org/10.1093/abm/kaaa070 (DOI)
September 2020

Personalizing Breast Cancer Screening Based on Polygenic Risk and Family History.

J Natl Cancer Inst 2021 Apr;113(4):434-442

Department of Public Health, Erasmus University Medical Center, Rotterdam, the Netherlands.

Background: We assessed the clinical utility of a first-degree breast cancer family history and polygenic risk score (PRS) to inform screening decisions among women aged 30-50 years.

Methods: Two established breast cancer models evaluated digital mammography screening strategies in the 1985 US birth cohort by risk groups defined by family history and PRS based on 313 single nucleotide polymorphisms. Strategies varied in initiation age (30, 35, 40, 45, and 50 years) and interval (annual, hybrid, biennial, triennial). The benefits (breast cancer deaths averted, life-years gained) and harms (false-positive mammograms, overdiagnoses) were compared with those seen with 3 established screening guidelines.

Results: Women with a breast cancer family history who initiated biennial screening at age 40 years (vs 50 years) had a 36% (model range = 29%-40%) increase in life-years gained and 20% (model range = 16%-24%) more breast cancer deaths averted, but 21% (model range = 17%-23%) more overdiagnoses and 63% (model range = 62%-64%) more false positives. Screening tailored to PRS vs biennial screening from 50 to 74 years had smaller positive effects on life-years gained (20%) and breast cancer deaths averted (11%) but also smaller increases in overdiagnoses (10%) and false positives (26%). Combined use of family history and PRS vs biennial screening from 50 to 74 years had the greatest increase in life-years gained (29%) and breast cancer deaths averted (18%).

Conclusions: Our results suggest that breast cancer family history and PRS could guide screening decisions before age 50 years among women at increased risk for breast cancer, but increases in overdiagnoses and false positives should be expected.
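
A polygenic risk score of the kind used here is, at its core, a weighted sum of risk-allele counts across the SNPs in the panel (313 in this study), usually interpreted relative to the population average. This sketch shows that principle with made-up SNPs, weights, and genotypes.

```python
# PRS sketch: weighted sum of effect-allele dosages (0/1/2) across SNPs.
# SNP IDs, weights, and genotypes below are made up.
import numpy as np
import pandas as pd

weights = pd.Series({"rs0001": 0.08, "rs0002": -0.05, "rs0003": 0.12})  # per-SNP log odds ratios

genotypes = pd.DataFrame(
    {"rs0001": [0, 1, 2], "rs0002": [1, 1, 0], "rs0003": [2, 0, 1]},
    index=["woman_A", "woman_B", "woman_C"],
)

prs = genotypes.mul(weights, axis=1).sum(axis=1)   # raw score per woman
prs_z = (prs - prs.mean()) / prs.std()             # standardised within the cohort
approx_rr = np.exp(prs - prs.mean())               # rough risk relative to the cohort mean
print(pd.DataFrame({"PRS": prs, "z": prs_z, "approx_RR": approx_rr.round(2)}))
```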

Source
http://dx.doi.org/10.1093/jnci/djaa127 (DOI)
April 2021

Change in an urban food environment within a single year: Considerations for food-environment research and community health.

Prev Med Rep 2020 Sep 22;19:101102. Epub 2020 Apr 22.

Department of Family and Social Medicine, Albert Einstein College of Medicine | Montefiore Health System, Bronx, NY, United States.

Past research on food-environment change has been limited in key ways: (1) considering only select storefront businesses; (2) presuming items sold based on business category; (3) describing change only in ecological terms; (4) considering only multi-year intervals. The current study addressed past limitations by: (1) considering a full range of both storefront and non-storefront businesses; (2) focusing on items actually offered (both healthful and less-healthful varieties); (3) describing individual-business-level changes (openings, closings, changes in offerings); (4) evaluating changes within a single year. Using a longitudinal, matched-pair comparison of 119 street segments in the Bronx, NY (October 2016-August 2017), investigators assessed all businesses (food stores, restaurants, other storefront businesses [OSBs], and street vendors) for healthful and less-healthful food/drink offerings. Changes were described for individual businesses, individual street segments, and for the area overall. Overall, the number (and percentage) of businesses offering any food/drink increased from 45 (41.7%) in 2016 to 49 (45.8%) in 2017; businesses newly opening or newly offering food/drink cumulatively exceeded those shutting down or ceasing food/drink sales. In 2016, OSBs (gyms, barber shops, laundromats, furniture stores, gas stations, etc.) together with street vendors represented 20.0% and 27.3% of businesses offering healthful and less-healthful items, respectively; in 2017, the percentages were 31.0% and 37.0%. While the number of businesses offering healthful items increased, the number offering less-healthful items likewise increased and remained greater. If change in a full range of food/drink availability is not appreciated: food-environment studies may generate erroneous conclusions; communities may misdirect resources to address food-access disparities; and community residents may have increasing, but unrecognized, opportunities for unhealthful consumption.

Source
http://dx.doi.org/10.1016/j.pmedr.2020.101102 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7334403 (PMC)
September 2020

Clinical Benefits, Harms, and Cost-Effectiveness of Breast Cancer Screening for Survivors of Childhood Cancer Treated With Chest Radiation: A Comparative Modeling Study.

Ann Intern Med 2020 09 7;173(5):331-341. Epub 2020 Jul 7.

Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, Massachusetts (N.K.S.).

Background: Surveillance with annual mammography and breast magnetic resonance imaging (MRI) is recommended for female survivors of childhood cancer treated with chest radiation, yet benefits, harms, and costs are uncertain.

Objective: To compare the benefits, harms, and cost-effectiveness of breast cancer screening strategies in childhood cancer survivors.

Design: Collaborative simulation modeling using 2 Cancer Intervention and Surveillance Modeling Network breast cancer models.

Data Sources: Childhood Cancer Survivor Study and published data.

Target Population: Women aged 20 years with a history of chest radiotherapy.

Time Horizon: Lifetime.

Perspective: Payer.

Intervention: Annual MRI with or without mammography, starting at age 25, 30, or 35 years.

Outcome Measures: Breast cancer deaths averted, false-positive screening results, benign biopsy results, and incremental cost-effectiveness ratios (ICERs).

Results Of Base-case Analysis: Lifetime breast cancer mortality risk without screening was 10% to 11% across models. Compared with no screening, starting at age 25 years, annual mammography with MRI averted the most deaths (56% to 71%) and annual MRI (without mammography) averted 56% to 62%. Both strategies had the most screening tests, false-positive screening results, and benign biopsy results. For an ICER threshold of less than $100 000 per quality-adjusted life-year gained, screening beginning at age 30 years was preferred.

Results Of Sensitivity Analysis: Assuming lower screening performance, the benefit of adding mammography to MRI increased in both models, although the conclusions about preferred starting age remained unchanged.

Limitation: Elevated breast cancer risk was based on survivors diagnosed with childhood cancer between 1970 and 1986.

Conclusion: Early initiation (at ages 25 to 30 years) of annual breast cancer screening with MRI, with or without mammography, might reduce breast cancer mortality by half or more in survivors of childhood cancer.

Primary Funding Source: American Cancer Society and National Institutes of Health.

Source
http://dx.doi.org/10.7326/M19-3481 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7510774 (PMC)
September 2020

Utilizing Cultural and Ethnic Variables in Screening Models to Identify Individuals at High Risk for Gastric Cancer: A Pilot Study.

Cancer Prev Res (Phila) 2020 08 14;13(8):687-698. Epub 2020 May 14.

Department of Epidemiology & Population Health, Albert Einstein College of Medicine, New York, New York.

Identifying persons at high risk for gastric cancer is needed for targeted interventions for prevention and control in low-incidence regions. Combining ethnic/cultural factors with conventional gastric cancer risk factors may enhance identification of high-risk persons. Data from a prior case-control study (40 gastric cancer cases and 100 controls) were used. A "conventional model" using risk factors included in the Harvard Cancer Risk Index's gastric cancer module was compared with a "parsimonious model" created from the most predictive variables of the conventional model as well as ethnic/cultural and socioeconomic variables. Model probability cutoffs aimed to identify a cohort with at least 10 times the baseline risk using Bayes' Theorem applied to baseline U.S. gastric cancer incidence. The parsimonious model included age, U.S. generation, race, cultural food at ages 15-18 years, excessive salt, education, alcohol, and family history. This 11-item model enriched the baseline risk by 10-fold, at the 0.5 probability level cutoff, with an estimated sensitivity of 72% [95% confidence interval (CI), 64-80], specificity of 94% (95% CI, 90-97), and ability to identify a subcohort with gastric cancer prevalence of 128.5 per 100,000. The conventional model was only able to reach a risk level of 9.8 times baseline with a corresponding sensitivity of 31% (95% CI, 23-39) and specificity of 97% (95% CI, 94-99). Cultural and ethnic data may add important information to models for identifying U.S. individuals at high risk for gastric cancer, who then could be targeted for interventions to prevent and control gastric cancer. The findings of this pilot study remain to be validated in an external dataset.
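
The enrichment arithmetic is an application of Bayes' theorem: given a model's sensitivity and specificity and the baseline rate, the expected disease prevalence among people the model flags as high risk is its positive predictive value. The sketch below uses the sensitivity and specificity reported in the abstract but an assumed baseline rate, so its outputs only approximate the figures quoted above.

```python
# Bayes' theorem: prevalence among those flagged high-risk
#   = sens * p / (sens * p + (1 - spec) * (1 - p)).
# The baseline rate here is an assumption for illustration.
def flagged_prevalence(sens, spec, baseline):
    return sens * baseline / (sens * baseline + (1 - spec) * (1 - baseline))

baseline = 10 / 100_000  # assumed US baseline gastric cancer rate

for name, sens, spec in [("parsimonious", 0.72, 0.94), ("conventional", 0.31, 0.97)]:
    prev = flagged_prevalence(sens, spec, baseline)
    print(f"{name}: {prev * 100_000:.0f} per 100,000 ({prev / baseline:.1f}x baseline)")
```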

Source
http://dx.doi.org/10.1158/1940-6207.CAPR-19-0490 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7415580 (PMC)
August 2020

Simulation Modeling to Extend Clinical Trials of Adjuvant Chemotherapy Guided by a 21-Gene Expression Assay in Early Breast Cancer.

JNCI Cancer Spectr 2019 Dec 10;3(4):pkz062. Epub 2019 Aug 10.

Department of Oncology, Georgetown University Medical Center and Cancer Prevention and Control Program, Georgetown-Lombardi Comprehensive Cancer Center, Washington, DC.

Purpose: The Trial Assigning Individualized Options for Treatment (TAILORx) found chemotherapy could be omitted in many women with hormone receptor-positive, HER2-negative, node-negative breast cancer and 21-gene recurrence scores (RS) 11-25, but left unanswered questions. We used simulation modeling to fill these gaps.

Methods: We simulated women eligible for TAILORx using joint distributions of patient and tumor characteristics and RS from TAILORx data; treatment effects by RS from other trials; and competing mortality from the Surveillance, Epidemiology, and End Results program database. The model simulations replicated the TAILORx design and then tested treatment effects on 9-year distant recurrence-free survival (DRFS) in 14 new scenarios: eight subgroups defined by age (≤50 and >50 years) and 21-gene RS (11-25/16-25/16-20/21-25); six different RS cut points among women ages 18-75 years (16-25, 16-20, 21-25, 26-30, 26-100); and 20-year follow-up. Mean hazard ratios (SD) and DRFS rates are reported from 1000 simulations.

Results: The simulation results closely replicated TAILORx findings, with 75% of simulated trials showing noninferiority for chemotherapy omission. There was a mean DRFS hazard ratio of 1.79 (0.94) for endocrine vs chemoendocrine therapy among women ages 50 years and younger with RS 16-25; the DRFS rates were 91.6% (0.04) for endocrine and 94.8% (0.01) for chemoendocrine therapy. When treatment was randomly assigned among women ages 18-75 years with RS 26-30, the mean DRFS hazard ratio for endocrine vs chemoendocrine therapy was 1.60 (0.83). The conclusions were unchanged at 20-year follow-up.

Conclusions: Our results confirmed a small benefit in chemotherapy among women aged 50 years and younger with RS 16-25. Simulation modeling is useful to extend clinical trials, indicate how uncertainty might affect results, and power decision tools to support broader practice discussions.

Source
http://dx.doi.org/10.1093/jncics/pkz062 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7049983 (PMC)
December 2019

Race and Ethnicity and Satisfaction With Communication in the Intensive Care Unit.

Am J Hosp Palliat Care 2020 Oct 2;37(10):823-829. Epub 2020 Apr 2.

Division of Critical Care, Department of Medicine, Albert Einstein College of Medicine, Bronx, NY, USA.

Purpose: Racial and ethnic minority patients receive poorer quality end-of-life (EoL) care compared with white patients. Differences in quality of communication (QOC) with clinicians may contribute to these disparities. We measured differences in satisfaction with communication in the intensive care unit (ICU) by race and ethnicity.

Materials And Methods: This is a cross-sectional survey of family members of patients in ICUs of an academic medical center serving a diverse urban population using The Family Satisfaction with the ICU (FS-ICU) and QOC scales.

Results: One hundred surveys were completed (18.8% white, non-Hispanic; 34.4% black, non-Hispanic; 31.3% Hispanic; 15.6% other race/ethnicity). Mean FS-ICU score was 84.2 (standard deviation [SD] 20.5) for white patients, 83.3 (SD 16.2) for black patients, 82.7 (SD 17.8) for Hispanic or Latino patients, and 80.9 (SD 18.8) for patients with other race/ethnicity (Kruskal-Wallis, P = .92). Differences remained nonsignificant when controlling for patient and respondent characteristics. The QOC scale was not scored due to nonresponse levels on questions about EoL communication.

Conclusions: Uniformly high ratings may have been influenced by avoidance of EoL discussion. This study is inconclusive regarding whether QOC influences disparities in EoL care since quality of EoL communication was not captured.

Source
http://dx.doi.org/10.1177/1049909120916126 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7716699 (PMC)
October 2020

Healthful and less-healthful foods and drinks from storefront and non-storefront businesses: implications for 'food deserts', 'food swamps' and food-source disparities.

Public Health Nutr 2020 06 30;23(8):1428-1439. Epub 2020 Mar 30.

Department of Family and Social Medicine, Albert Einstein College of Medicine | Montefiore Health System, Bronx, NY, USA.

Objective: Conceptualisations of 'food deserts' (areas lacking healthful food/drink) and 'food swamps' (areas overwhelmed by less-healthful fare) may be both inaccurate and incomplete. Our objective was to more accurately and completely characterise food/drink availability in urban areas.

Design: Cross-sectional assessment of select healthful and less-healthful food/drink offerings from storefront businesses (stores, restaurants) and non-storefront businesses (street vendors).

Setting: Two areas of New York City: the Bronx (higher-poverty, mostly minority) and the Upper East Side (UES; wealthier, predominantly white).

Participants: All businesses on 63 street segments in the Bronx (n 662) and on 46 street segments in the UES (n 330).

Results: Greater percentages of businesses offered any, any healthful, and only less-healthful food/drink in the Bronx (42·0 %, 37·5 %, 4·4 %, respectively) than in the UES (30 %, 27·9 %, 2·1 %, respectively). Differences were driven mostly by businesses (e.g. newsstands, gyms, laundromats) not primarily focused on selling food/drink - 'other storefront businesses' (OSBs). OSBs accounted for 36·0 % of all food/drink-offering businesses in the Bronx (more numerous than restaurants or so-called 'food stores') and 18·2 % in the UES (more numerous than 'food stores'). Differences also related to street vendors in both the Bronx and the UES. If street vendors and OSBs were not captured, the missed percentages of street segments offering food/drink would be 14·5 % in the Bronx and 21·9 % in the UES.

Conclusions: Of businesses offering food/drink in communities, OSBs and street vendors can represent substantial percentages. Focusing on only 'food stores' and restaurants may miss or mischaracterise 'food deserts', 'food swamps', and food/drink-source disparities between communities.

Source
http://dx.doi.org/10.1017/S1368980019004427 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7196006 (PMC)
June 2020

Impact of a Telephonic Intervention to Improve Diabetes Control on Health Care Utilization and Cost for Adults in South Bronx, New York.

Diabetes Care 2020 04 4;43(4):743-750. Epub 2020 Mar 4.

New York City Department of Health and Mental Hygiene, New York, NY.

Objective: Self-management education and support are essential for improved diabetes control. A 1-year randomized telephonic diabetes self-management intervention (Bronx A1C) among a predominantly Latino and African American population in New York City was found effective in improving blood glucose control. To further those findings, this current study assessed the intervention's impact in reducing health care utilization and costs over 4 years.

Research Design And Methods: We measured inpatient (n = 816) health care utilization for Bronx A1C participants using an administrative data set containing all hospital discharges for New York State from 2006 to 2014. Multilevel mixed modeling was used to assess changes in health care utilization and costs between the telephonic diabetes intervention (Tele/Pr) arm and the print-only (PrO) control arm.

Results: During follow-up, excess relative reductions in all-cause hospitalizations for the Tele/Pr arm compared with the PrO arm were statistically significant for odds of hospital use (odds ratio [OR] 0.89; 95% CI 0.82, 0.97; P < 0.01), number of hospital stays (rate ratio [RR] 0.90; 95% CI 0.81, 0.99; P = 0.04), and hospital costs (RR 0.90; 95% CI 0.84, 0.98; P = 0.01). Reductions in hospital use and costs were even stronger for diabetes-related hospitalizations. These outcomes were not significantly related to changes observed in hemoglobin A1c during individuals' participation in the 1-year intervention.

Conclusions: These results indicate that the impact of the Bronx A1C intervention was not just on short-term improvements in glycemic control but also on long-term health care utilization. This finding is important because it suggests the benefits of the intervention were long-lasting with the potential to not only reduce hospitalizations but also to lower hospital-associated costs.

Source
http://dx.doi.org/10.2337/dc19-0954 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7085809 (PMC)
April 2020

A Multicenter Randomized Controlled Trial of Intensive Group Therapy for Tobacco Treatment in HIV-Infected Cigarette Smokers.

J Acquir Immune Defic Syndr 2020 04;83(4):405-414

Departments of Epidemiology and Population Health.

Background: Tobacco use has emerged as the leading killer of persons living with HIV (PLWH) in the United States. Little is known about the efficacy of tobacco treatment strategies in PLWH.

Design: Randomized controlled trial comparing Positively Smoke Free (PSF), an intensive group therapy intervention targeting HIV-infected smokers, to brief advice to quit. All participants were offered a 12-week supply of nicotine patches.

Methods: A cohort of 450 PLWH smokers, recruited from HIV-care centers in the Bronx, New York, and Washington, DC, were randomized 1:1 into the PSF or brief advice to quit conditions. PSF is an 8-session program tailored to address the needs and concerns of HIV-infected smokers and delivered by a trained smoking cessation counselor and PLWH ex-smoker peer pair. The primary outcome was biochemically confirmed, 7-day point-prevalence abstinence at 6 months.

Results: In the intention to treat analysis, PSF condition subjects had nearly double the quit rate of controls, 13% vs. 6.6% [odds ratio = 2.10 (95% confidence interval = 1.10 to 4.14), P = 0.04], at 3 months, but no significant difference in abstinence was observed at 6 months. PSF participants exhibited lower nicotine dependence and higher self-efficacy to resist smoking temptations at both 3 and 6 months compared with controls. Lower educational attainment, current cocaine use, past use of nicotine patches, and higher distress tolerance were significant predictors of continued smoking at 6 months.

Conclusions: These findings suggest a role for group therapy among tobacco treatments for PLWH smokers, but strategies to augment the durability of early effects are needed.

Source
http://dx.doi.org/10.1097/QAI.0000000000002271 (DOI)
April 2020

Effectiveness of clinical decision support to enhance delivery of family planning services in primary care settings.

Contraception 2020 03 17;101(3):199-204. Epub 2019 Dec 17.

Institute for Family Health, 2006 Madison Ave, New York, NY 10035, United States.

Purpose: There is a need to improve delivery of family planning services, including preconception and contraception services, in primary care. We assessed whether a clinician-facing clinical decision support implemented in a family medicine staffed primary care network improved provision of family planning services for reproductive-aged female patients, and differed in effect for certain patients or clinical settings.

Methods: We conducted a pragmatic study with difference-in-differences design to estimate, at the visit-level, the clinical decision support's effect on documenting the provision of family planning services 52 weeks prior to and after implementation. We also used logistic regression with a sample subset to evaluate intervention effect on the patient-level.

Results: 27,817 eligible patients made 91,185 visits during the study period. Overall, unadjusted documentation of family planning services increased by 2.7 percentage points (from 55.7% pre-intervention to 58.4% during the intervention period). In the adjusted analysis, documentation increased by 3.4 percentage points (95% CI: 2.24, 4.63). The intervention effect varied across sites at the visit level, ranging from a -1.2 to a +6.5 percentage point change. Effect modification by race, insurance, and site was substantial, but not by age group or ethnicity. Additionally, the patient-level subset analysis showed that those exposed to the intervention had 1.26 times the odds of having family planning services documented after implementation compared with controls (95% CI: 1.17, 1.36).

Conclusions: This clinical decision support modestly improved documentation of family planning services in our primary care network; effect varied across sites.

Implications: Integrating a family planning services clinical decision support into the electronic medical record at primary care sites may increase the provision of preconception and/or contraception services for women of reproductive age. Further study should explore intervention effect at sites with lower initial provision of family planning services.
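
For readers unfamiliar with the design, the difference-in-differences estimate comes from the interaction between exposure and the pre/post period in a visit-level regression. The sketch below illustrates that setup with hypothetical column names and a simplified adjustment set; it is not the study's actual specification.

```python
# Visit-level difference-in-differences sketch: the exposed:post coefficient is
# the estimated effect of the clinical decision support on documentation of
# family planning services.  Hypothetical data and column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

visits = pd.read_csv("visits.csv")  # 52 weeks pre- and post-implementation

did = smf.logit(
    "fp_documented ~ exposed * post + C(site) + C(age_group) + C(insurance)",
    data=visits,
).fit()

print(did.summary())
print("DiD odds ratio:", np.exp(did.params["exposed:post"]))
```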

Source
http://dx.doi.org/10.1016/j.contraception.2019.11.002 (DOI)
March 2020

Government data v. ground observation for food-environment assessment: businesses missed and misreported by city and state inspection records.

Public Health Nutr 2020 06 4;23(8):1414-1427. Epub 2019 Nov 4.

Department of Population Health, NYU School of Medicine, New York, NY, USA.

Objective: To assess the accuracy of government inspection records, relative to ground observation, for identifying businesses offering foods/drinks.

Design: Agreement between city and state inspection records v. ground observations at two levels: businesses and street segments. Agreement could be 'strict' (by business name, e.g. 'Rizzo's') or 'lenient' (by business type, e.g. 'pizzeria'); using sensitivity and positive predictive value (PPV) for businesses and using sensitivity, PPV, specificity and negative predictive value (NPV) for street segments.

Setting: The Bronx and the Upper East Side (UES), New York City, USA.

Participants: All food/drink-offering businesses on sampled street segments (n 154 in the Bronx, n 51 in the UES).

Results: By 'strict' criteria, sensitivity and PPV of government records for food/drink-offering businesses were 0·37 and 0·57 in the Bronx; 0·58 and 0·60 in the UES. 'Lenient' values were 0·40 and 0·62 in the Bronx; 0·60 and 0·62 in the UES. Sensitivity, PPV, specificity and NPV of government records for street segments having food/drink-offering businesses were 0·66, 0·73, 0·84 and 0·79 in the Bronx; 0·79, 0·92, 0·67, and 0·40 in the UES. In both areas, agreement varied by business category: restaurants; 'food stores'; and government-recognized other storefront businesses ('gov. OSB', i.e. dollar stores, gas stations, pharmacies). Additional business categories - 'other OSB' (barbers, laundromats, newsstands, etc.) and street vendors - were absent from government records; together, they represented 28·4 % of all food/drink-offering businesses in the Bronx, 22·2 % in the UES ('other OSB' and street vendors were sources of both healthful and less-healthful foods/drinks in both areas).

Conclusions: Government records frequently miss or misrepresent businesses offering foods/drinks, suggesting caveats for food-environment assessments using such records.
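
At the street-segment level all four agreement statistics are standard confusion-matrix quantities; at the business level only sensitivity and PPV are defined, since a "true negative" business is not observable. A small sketch with made-up counts:

```python
# Agreement of government records with ground observation for street segments.
# tp: segment has a food/drink business and records say so; fp: records say so
# but observation finds none; fn: observed but not in records; tn: neither.
# Counts are made up for illustration.
def agreement(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "PPV": tp / (tp + fp),
        "specificity": tn / (tn + fp),
        "NPV": tn / (tn + fn),
    }

print(agreement(tp=60, fp=22, fn=31, tn=41))
```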

Source
http://dx.doi.org/10.1017/S1368980019002982 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7196000 (PMC)
June 2020

Simulation of Chemotherapy Effects in Older Breast Cancer Patients With High Recurrence Scores.

J Natl Cancer Inst 2020 06;112(6):574-581

Department of Oncology, Georgetown University Medical Center, Lombardi Comprehensive Cancer Center, Cancer Prevention and Control Program, Washington, DC.

Background: Tumor genomic expression profile data are used to guide chemotherapy choice, but there are gaps in evidence for women aged 65 years and older. We estimate chemotherapy effects by age and comorbidity level among women with early-stage, hormone receptor-positive, human epidermal growth factor receptor 2 (HER2)-negative breast cancers and Oncotype DX scores of 26 or higher.

Methods: A discrete-time stochastic state transition simulation model synthesized data from population studies and clinical trials to estimate outcomes over a 25-year horizon for subgroups based on age (65-69, 70-74, 75-79, and 80-89 years) and comorbidity levels (no or low, moderate, severe). Outcomes were discounted at 3%, and included quality-adjusted life-years (QALYs), life-years, and breast cancer and other-cause mortality with chemoendocrine vs endocrine therapy. Sensitivity analysis tested the effect of varying uncertain parameters.

Results: Women aged 65-69 years with no or low comorbidity gained 0.16 QALYs with chemo-endocrine and reduced breast cancer mortality from 34.8% to 29.7%, for an absolute difference of 5.1%; this benefit was associated with a 12.8% rate of grade 3-4 toxicity. Women aged 65-69 years with no or low or moderate comorbidity levels, and women aged 70-74 years with no or low comorbidity had small chemotherapy benefits. All women aged 75 years and older experienced net losses in QALYs with chemo-endocrine therapy. The results were robust in sensitivity analyses. Chemotherapy had greater benefits as treatment effectiveness increased, but toxicity reduced the QALYs gained.

Conclusion: Among women aged 65-89 years whose tumors indicate a high recurrence risk, only those aged 65-74 years with no or low or moderate comorbidity have small benefits from adding chemotherapy to endocrine therapy. Genomic expression profile testing (and chemotherapy use) should be reserved for women aged younger than 75 years without severe comorbidity.
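
The engine behind these estimates is a discrete-time state-transition (cohort) simulation with outcomes discounted at 3% per year. Here is a stripped-down sketch of that machinery; the states, transition probabilities, utilities, and horizon handling are illustrative and do not reproduce the study's inputs.

```python
# Minimal discrete-time state-transition cohort model with 3% annual discounting.
# States, transition probabilities, and utilities are illustrative only.
import numpy as np

states = ["recurrence_free", "distant_recurrence", "dead"]
P = np.array([            # annual transition matrix (rows sum to 1), made up
    [0.93, 0.04, 0.03],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],
])
utility = np.array([0.82, 0.60, 0.0])  # assumed quality-of-life weight per state
discount = 0.03

cohort = np.array([1.0, 0.0, 0.0])     # everyone starts recurrence free
qalys = 0.0
for year in range(25):                  # 25-year horizon, as in the abstract
    qalys += (cohort @ utility) / (1 + discount) ** year
    cohort = cohort @ P

print(f"discounted QALYs per woman: {qalys:.2f}")
print(dict(zip(states, cohort.round(3))))
```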

Source
http://dx.doi.org/10.1093/jnci/djz189 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7301154 (PMC)
June 2020

Long-Term Outcomes and Cost-Effectiveness of Breast Cancer Screening With Digital Breast Tomosynthesis in the United States.

J Natl Cancer Inst 2020 06;112(6):582-589

Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA.

Background: Digital breast tomosynthesis (DBT) is increasingly being used for routine breast cancer screening. We projected the long-term impact and cost-effectiveness of DBT compared to conventional digital mammography (DM) for breast cancer screening in the United States.

Methods: Three Cancer Intervention and Surveillance Modeling Network breast cancer models simulated US women ages 40 years and older undergoing breast cancer screening with either DBT or DM starting in 2011 and continuing for the lifetime of the cohort. Screening performance estimates were based on observational data; in an alternative scenario, we assumed 4% higher sensitivity for DBT. Analyses used federal payer perspective; costs and utilities were discounted at 3% annually. Outcomes included breast cancer deaths, quality-adjusted life-years (QALYs), false-positive examinations, costs, and incremental cost-effectiveness ratios (ICERs).

Results: Compared with DM, DBT screening resulted in a slight reduction in breast cancer deaths (range across models 0-0.21 per 1000 women), a small increase in QALYs (1.97-3.27 per 1000 women), and a 24-28% reduction in false-positive exams (237-268 per 1000 women). ICERs ranged from $195 026 to $270 135 per QALY for DBT relative to DM. When assuming 4% higher DBT sensitivity, ICERs decreased to $130 533-$156 624 per QALY. ICERs were sensitive to DBT costs, decreasing to $78 731 to $168 883 and $52 918 to $118 048 when the additional cost of DBT was reduced to $36 and $26 (from baseline of $56), respectively.

Conclusion: DBT reduces false-positive exams while achieving similar or slightly improved health benefits. At current reimbursement rates, the additional costs of DBT screening are likely high relative to the benefits gained; however, DBT could be cost-effective at lower screening costs.

Source
http://dx.doi.org/10.1093/jnci/djz184 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7301096 (PMC)
June 2020

Risk of Wrong-Patient Orders Among Multiple vs Singleton Births in the Neonatal Intensive Care Units of 2 Integrated Health Care Systems.

JAMA Pediatr 2019 Aug 26. Epub 2019 Aug 26.

Division of General Internal Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts.

Importance: Multiple-birth infants in neonatal intensive care units (NICUs) have nearly identical patient identifiers and may be at greater risk of wrong-patient order errors compared with singleton-birth infants.

Objectives: To assess the risk of wrong-patient orders among multiple-birth infants and singletons receiving care in the NICU and to examine the proportion of wrong-patient orders between multiple-birth infants and siblings (intrafamilial errors) and between multiple-birth infants and nonsiblings (extrafamilial errors).

Design, Setting, And Participants: A retrospective cohort study was conducted in 6 NICUs of 2 large, integrated health care systems in New York City that used distinct temporary names for newborns per the requirements of The Joint Commission. Data were collected from 4 NICUs at New York-Presbyterian Hospital from January 1, 2012, to December 31, 2015, and 2 NICUs at Montefiore Health System from July 1, 2013, to June 30, 2015. Data were analyzed from May 1, 2017, to December 31, 2017. All infants in the 6 NICUs for whom electronic orders were placed during the study periods were included.

Main Outcomes And Measures: Wrong-patient electronic orders were identified using the Wrong-Patient Retract-and-Reorder (RAR) Measure. This measure was used to detect RAR events, which are defined as 1 or more orders placed for a patient that are retracted (ie, canceled) by the same clinician within 10 minutes, then reordered by the same clinician for a different patient within the next 10 minutes.
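
Conceptually, the RAR measure is a window query over the order log: find an order that a clinician retracts shortly after placing it and then re-places for a different patient within 10 minutes. The sketch below implements only the retract-then-reorder window with a hypothetical log schema; it is not the validated measure used in the study.

```python
# Simplified Retract-and-Reorder (RAR) sketch: an order retracted by a clinician
# and then re-placed by the same clinician for a different patient within
# 10 minutes of the retraction.  Hypothetical order-log schema.
import pandas as pd

TEN_MIN = pd.Timedelta(minutes=10)

def find_rar_events(orders: pd.DataFrame) -> pd.DataFrame:
    """orders columns: clinician_id, patient_id, order_name, action, time."""
    placed = orders[orders["action"] == "placed"]
    retracted = orders[orders["action"] == "retracted"]

    events = []
    for _, r in retracted.iterrows():
        match = placed[
            (placed["clinician_id"] == r["clinician_id"])
            & (placed["order_name"] == r["order_name"])
            & (placed["patient_id"] != r["patient_id"])
            & (placed["time"] > r["time"])
            & (placed["time"] <= r["time"] + TEN_MIN)
        ]
        if not match.empty:
            events.append({
                "clinician_id": r["clinician_id"],
                "wrong_patient": r["patient_id"],
                "reordered_for": match.iloc[0]["patient_id"],
                "order_name": r["order_name"],
            })
    return pd.DataFrame(events)
```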

Results: A total of 10 819 infants were included: 85.5% were singleton-birth infants and 14.5% were multiple-birth infants (male, 55.8%; female, 44.2%). The overall wrong-patient order rate was significantly higher among multiple-birth infants than among singleton-birth infants (66.0 vs 41.7 RAR events per 100 000 orders, respectively; adjusted odds ratio, 1.75; 95% CI, 1.39-2.20; P < .001). The rate of extrafamilial RAR events among multiple-birth infants (36.1 per 100 000 orders) was similar to that of singleton-birth infants (41.7 per 100 000 orders). The excess risk among multiple-birth infants (29.9 per 100 000 orders) appears to be owing to intrafamilial RAR events. The risk increased as the number of siblings receiving care in the NICU increased; a wrong-patient order error occurred in 1 in 7 sets of twin births and in 1 in 3 sets of higher-order multiple births.

Conclusions And Relevance: This study suggests that multiple-birth status in the NICU is associated with significantly increased risk of wrong-patient orders compared with singleton-birth status. This excess risk appears to be owing to misidentification between siblings. These results suggest that a distinct naming convention as required by The Joint Commission may provide insufficient protection against identification errors among multiple-birth infants. Strategies to reduce this risk include using given names at birth, changing from temporary to given names when available, and encouraging parents to select names for multiple births before they are born when acceptable to families.

Source
http://dx.doi.org/10.1001/jamapediatrics.2019.2733 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6714004 (PMC)
August 2019

Expected Monetary Impact of Oncotype DX Score-Concordant Systemic Breast Cancer Therapy Based on the TAILORx Trial.

J Natl Cancer Inst 2020 02;112(2):154-160

Department of Oncology, Georgetown University Medical Center and Cancer Prevention and Control Program, Georgetown-Lombardi Comprehensive Cancer Center, Washington, DC.

Background: TAILORx demonstrated that women with node-negative, hormone receptor-positive, HER2-negative breast cancers and Oncotype DX recurrence scores (RS) of 0-25 had similar 9-year outcomes with endocrine vs chemo-endocrine therapy; evidence for women aged 50 years and younger and RS 16-25 was less clear. We estimated how expected changes in practice following the trial might affect US costs in the initial 12 months of care (initial costs).

Methods: Data from Surveillance, Epidemiology, and End Results (SEER), SEER-Medicare, and SEER-Genomic Health Inc datasets were used to estimate Oncotype DX testing and chemotherapy rates and mean initial costs pre- and post-TAILORx (in 2018 dollars), assuming all women received Oncotype DX testing and score-suggested therapy posttrial. Sensitivity analyses tested the impact on costs of assumptions about compliance with testing and score-suggested treatment and estimation methods.

Results: Pretrial mean initial costs were $2.816 billion. Posttrial, Oncotype DX testing costs were projected to increase from $115 to $231 million and chemotherapy use to decrease from 25% to 17%, resulting in initial care costs of $2.766 billion, or a net savings of $49 million (a 1.8% decrease). A small net savings was seen under most assumptions. The one exception was that if all women aged 50 years and younger with tumors with RS 16-25 elected to receive chemotherapy, initial care costs could increase by $105 million (a 4% increase).

Conclusions: Personalizing breast cancer treatment based on tumor genetic profiles could result in small cost decreases in the initial 12 months of care. Studies are needed to evaluate the long-term costs and nonmonetary benefits of personalized cancer care.

Source
http://dx.doi.org/10.1093/jnci/djz068 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7019096 (PMC)
February 2020

Evaluation of a Vertical Box Plating Technique for Mandibular Body Fractures and Retrospective Analysis of Patient Outcomes.

JAMA Facial Plast Surg 2019 Jul;21(4):271-276

Department of Otorhinolaryngology-Head and Neck Surgery, Jacobi Medical Center, Albert Einstein College of Medicine, Bronx, New York.

Importance: Despite advancements, treatment of mandibular body fractures is plagued by complications. Evaluation of a new plating system is needed with the goal of reducing complication rates.

Objectives: To evaluate the biomechanical behavior of a vertically oriented box plate vs traditional rigid internal fixation plating techniques for mandibular body fractures and to test if placement of the 3-dimensional plate oriented parallel to the fracture line provides improved rigidity and greater resistance to torsion, resulting in improved outcomes.

Design, Setting, And Participants: A mandible fracture model with synthetic replicas was used to compare resistance to torsional forces of different plating configurations. Additionally, a retrospective comparative review of the medical records of 84 patients with mandibular body fractures treated from 2005 to 2018 at Jacobi Medical Center, a level-1 trauma hospital in Bronx, New York, was completed.

Exposures: Patients sustained a mandibular body fracture and were treated with open reduction and internal fixation using metal plating.

Main Outcomes And Measures: In the comparative study of biomechanical behavior of various plating configurations, maximum torque sustained prior to deformation and loss of alignment was measured. Medical records were reviewed for surgical approach, plating techniques, operative time, length of admission, and rate of complications, including malocclusion, nonunion, infection, neurosensory disturbance, and wound dehiscence.

Results: Of the 84 patients included in the retrospective review, 76 (91%) were men, and the mean (SD) age was 29.7 (12.0) years. During biomechanical analysis, the vertical box plate provided greater stability and 150% of the resistance against torsional forces when compared with traditional linear plating. In the retrospective review, analysis showed that vertical box plating was associated with a lower incidence of postoperative neurosensory disturbance (25 [38%] patients treated with traditional linear plating vs 0 patients treated with box plating; P = .002) and a lower risk of any complication (41 [62%] vs 6 [33%], respectively; relative risk, 0.54; 95% CI, 0.27-1.06; P = .03). Vertical box plating was also associated with reduced operative time (134 minutes with traditional linear plating vs 70 minutes with box plating; P < .001).

Conclusions And Relevance: This investigation suggests that vertical box plating is associated with a lower incidence of postoperative complications and reduced operative time compared with traditional plating techniques. The comparative biomechanical component demonstrated that the vertical box plate offered equal or greater resistance to torsional forces. Further studies of greater power and level of evidence are needed to more robustly demonstrate these benefits.

Level Of Evidence: 3.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1001/jamafacial.2019.0057DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6547119PMC
July 2019

Effect of Restriction of the Number of Concurrently Open Records in an Electronic Health Record on Wrong-Patient Order Errors: A Randomized Clinical Trial.

JAMA 2019 05;321(18):1780-1787

Division of Hospital Medicine, Department of Medicine, Albert Einstein College of Medicine, Montefiore Health System, Bronx, New York.

Importance: Recommendations in the United States suggest limiting the number of patient records displayed in an electronic health record (EHR) to 1 at a time, although little evidence supports this recommendation.

Objective: To assess the risk of wrong-patient orders in an EHR configuration limiting clinicians to 1 record vs allowing up to 4 records opened concurrently.

Design, Setting, And Participants: This randomized clinical trial included 3356 clinicians at a large health system in New York and was conducted from October 2015 to April 2017 in emergency department, inpatient, and outpatient settings.

Interventions: Clinicians were randomly assigned in a 1:1 ratio to an EHR configuration limiting to 1 patient record open at a time (restricted; n = 1669) or allowing up to 4 records open concurrently (unrestricted; n = 1687).

Main Outcomes And Measures: The unit of analysis was the order session, a series of orders placed by a clinician for a single patient. The primary outcome was order sessions that included 1 or more wrong-patient orders identified by the Wrong-Patient Retract-and-Reorder measure (an electronic query that identifies orders placed for a patient, retracted, and then reordered shortly thereafter by the same clinician for a different patient).
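
For readers unfamiliar with retract-and-reorder measures, the sketch below shows, in simplified form, how such an electronic query might be implemented. The field names and the 10-minute retraction and reorder windows are illustrative assumptions, not the specification of the validated Wrong-Patient Retract-and-Reorder measure.

    from datetime import timedelta

    # Illustrative retract-and-reorder style query. Field names and the
    # 10-minute windows are assumptions for illustration only.
    RETRACT_WINDOW = timedelta(minutes=10)
    REORDER_WINDOW = timedelta(minutes=10)

    def find_rar_events(orders):
        """orders: list of dicts with keys clinician_id, patient_id, order_name,
        placed_at, retracted_at (None if the order was never retracted)."""
        rar_events = []
        retracted = [o for o in orders if o["retracted_at"] is not None
                     and o["retracted_at"] - o["placed_at"] <= RETRACT_WINDOW]
        for r in retracted:
            for o in orders:
                same_clinician = o["clinician_id"] == r["clinician_id"]
                different_patient = o["patient_id"] != r["patient_id"]
                same_order = o["order_name"] == r["order_name"]
                reordered_soon = (timedelta(0)
                                  <= o["placed_at"] - r["retracted_at"]
                                  <= REORDER_WINDOW)
                if same_clinician and different_patient and same_order and reordered_soon:
                    rar_events.append((r, o))
        return rar_events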

Results: Among the 3356 clinicians who were randomized (mean [SD] age, 43.1 [12.5] years; mean [SD] experience at study site, 6.5 [6.0] years; 1894 females [56.4%]), all provided order data and were included in the analysis. The study included 12 140 298 orders, in 4 486 631 order sessions, placed for 543 490 patients. There was no significant difference in wrong-patient order sessions per 100 000 in the restricted vs unrestricted group, respectively, overall (90.7 vs 88.0; odds ratio [OR], 1.03 [95% CI, 0.90-1.20]; P = .60) or in any setting (ED: 157.8 vs 161.3; OR, 1.00 [95% CI, 0.83-1.20]; P = .96; inpatient: 185.6 vs 185.1; OR, 0.99 [95% CI, 0.89-1.11]; P = .86; outpatient: 7.9 vs 8.2; OR, 0.94 [95% CI, 0.70-1.28]; P = .71). The effect did not differ among settings (P for interaction = .99). In the unrestricted group overall, 66.2% of the order sessions were completed with 1 record open, including 34.5% of ED, 53.7% of inpatient, and 83.4% of outpatient order sessions.

Conclusions And Relevance: A strategy that limited clinicians to 1 EHR patient record open compared with a strategy that allowed up to 4 records open concurrently did not reduce the proportion of wrong-patient order errors. However, clinicians in the unrestricted group placed most orders with a single record open, limiting the power of the study to determine whether reducing the number of records open when placing orders reduces the risk of wrong-patient order errors.

Trial Registration: clinicaltrials.gov Identifier: NCT02876588.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1001/jama.2019.3698DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6518341PMC
May 2019

Diagnosis of Urinary Tract Infections by Urine Flow Cytometry: Adjusted Cut-Off Values in Different Clinical Presentations.

Dis Markers 2019 3;2019:5853486. Epub 2019 Mar 3.

Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.

Background: Bacterium and leucocyte counts in urine can be measured by urine flow cytometry (UFC). They are used to predict significant bacterial growth in urine culture and to diagnose infections of the urinary tract. However, little information is available on appropriate UFC cut-off values for bacterium and leucocyte counts in specific clinical presentations.

Objective: To develop, validate, and evaluate adapted cut-off values that result in a high negative predictive value for significant bacterial growth in urine culture in common clinical presentation subgroups.

Methods: This is a single-center, retrospective, observational study using data from patients with suspected urinary tract infections who presented to the emergency department of Bern University Hospital, Switzerland. The patients presented with different symptoms, and urine culture and urine flow cytometry were performed. For different clinical presentations, the patients were grouped by (i) age (>65 years), (ii) sex, (iii) clinical symptoms (e.g., fever or dysuria), and (iv) comorbidities such as diabetes and immunosuppression. For each group, cut-off values were developed, validated, and analyzed using different strategies, i.e., linear discriminant analysis (LDA) and Youden's index, and were compared with known cut-offs and with cut-offs optimized for sensitivity.
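
As an illustration of the Youden's index strategy mentioned above, the sketch below derives a cut-off for UFC leucocyte counts from an ROC curve using scikit-learn. The simulated counts and the lognormal parameters are placeholders, not study data.

    import numpy as np
    from sklearn.metrics import roc_curve

    # Placeholder data: UFC leucocyte counts and urine-culture outcome (1 = significant growth).
    rng = np.random.default_rng(0)
    leucocytes = np.concatenate([rng.lognormal(3.0, 1.0, 300),   # culture-negative visits
                                 rng.lognormal(5.0, 1.0, 300)])  # culture-positive visits
    culture_positive = np.concatenate([np.zeros(300), np.ones(300)])

    fpr, tpr, thresholds = roc_curve(culture_positive, leucocytes)
    youden = tpr - fpr                      # Youden's J at each candidate threshold
    best = np.argmax(youden)
    print(f"Youden-optimal cut-off: {thresholds[best]:.0f}/µL "
          f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")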

Results: 613 patients were included in the study. The prevalence of significant bacterial growth in urine culture depended on clinical presentation and ranged from 32.3% in male patients to 61.5% in patients with urinary frequency. In all clinical presentations, the predictive accuracy of UFC leucocyte and UFC bacterium counts for significant bacterial growth in urine culture was good (AUC ≥ 0.88). The adapted LDA equations did not exhibit consistently high sensitivity. However, the in-house cut-offs (test positive if UFC leucocytes > 17/µL or UFC bacteria > 125/µL) were highly sensitive (>90%). In female, younger, and dysuric patients, even higher cut-offs for UFC leucocytes (169/µL, 169/µL, and 205/µL, respectively) exhibited high sensitivity. Specificity was insufficient (<0.9) for all tested cut-offs.
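
The combined in-house rule can be expressed compactly in code. The sketch below applies it to placeholder arrays and estimates sensitivity and negative predictive value, which is how such cut-offs are typically evaluated against urine culture; the values are not study data.

    import numpy as np

    # Placeholder arrays; in practice these come from UFC measurements and culture results.
    leucocytes_per_ul = np.array([5, 30, 12, 400, 8, 250])
    bacteria_per_ul   = np.array([10, 500, 40, 9000, 60, 20])
    culture_positive  = np.array([0, 1, 0, 1, 0, 1], dtype=bool)

    # In-house rule from the abstract: positive if leucocytes > 17/µL or bacteria > 125/µL.
    test_positive = (leucocytes_per_ul > 17) | (bacteria_per_ul > 125)

    sensitivity = (test_positive & culture_positive).sum() / culture_positive.sum()
    npv = (~test_positive & ~culture_positive).sum() / (~test_positive).sum()
    print(f"sensitivity = {sensitivity:.2f}, NPV = {npv:.2f}")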

Conclusions: For various clinical presentations, significant bacterial growth in urine culture can be excluded if flow cytometry measurements give a bacterial count of ≤125/µL or a leucocyte count of ≤17/µL. In female patients, dysuric patients, and patients aged 65 years or younger, the leucocyte cut-off can be increased to 170/µL.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1155/2019/5853486DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6421762PMC
July 2019

Impact of Multidisciplinary Process Improvement Interventions on Glucometrics in a Noncritically Ill Setting.

Endocr Pract 2019 Jul 13;25(7):689-697. Epub 2019 Mar 13.

This study aimed to assess the impact of multidisciplinary process improvement interventions on glycemic control in the inpatient setting of an urban community hospital, using the daily simple average as the primary glucometric measure. From 2010 to 2014, five process-of-care interventions were implemented in the noncritical care inpatient units of the study hospital. Interventions included education of medical staff, implementation of hyperglycemia and hypoglycemia protocols, computerized insulin order entry, and coordination of meal tray delivery with finger stick testing and insulin administration. Unpaired t tests compared pre- and postintervention process measures. The simple average daily glucose was the primary glucometric outcome. Secondary outcome measures included the frequency of hyperglycemia and hypoglycemia. Glucose outcomes were compared with those of an in-network hospital that did not implement the respective interventions. A total of 180,431 glucose measurements were reported from 4,705 and 4,238 patients at the intervention and comparison hospitals, respectively. The time between bolus-insulin administration and breakfast tray delivery was significantly reduced, by 81.7 minutes (P < .00005). The use of sliding scale insulin was sustainably reduced. Average daily glucose was reduced at both hospitals, and overall rates of hypoglycemia were low. A multidisciplinary approach at an urban community hospital with limited resources was effective in improving and sustaining processes of care for improved glycemic control in the noncritical care, inpatient setting. Abbreviations: IQR = interquartile range; JMC = Jacobi Medical Center; NCBH = North Central Bronx Hospital.
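
To make the primary glucometric measure concrete, the sketch below computes the daily simple average glucose per patient-day and compares pre- and postintervention values with an unpaired t test. The column names and toy values are assumptions for illustration, not study data.

    import pandas as pd
    from scipy import stats

    # Placeholder point-of-care glucose data; columns are illustrative assumptions.
    poc = pd.DataFrame({
        "patient_id":    [1, 1, 1, 2, 2, 3, 3, 3],
        "day":           [1, 1, 2, 1, 1, 1, 2, 2],
        "glucose_mg_dl": [180, 220, 150, 140, 160, 300, 250, 210],
        "period":        ["pre", "pre", "pre", "post", "post", "post", "post", "post"],
    })

    # Daily simple average: mean of all glucose values per patient-day.
    daily_avg = (poc.groupby(["patient_id", "day", "period"])["glucose_mg_dl"]
                    .mean().reset_index(name="daily_avg"))

    # Unpaired t test comparing pre- vs postintervention daily averages.
    pre = daily_avg.loc[daily_avg["period"] == "pre", "daily_avg"]
    post = daily_avg.loc[daily_avg["period"] == "post", "daily_avg"]
    t, p = stats.ttest_ind(pre, post, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")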
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.4158/EP-2018-0497DOI Listing
July 2019

Evaluation of a program to improve intermediate diabetes outcomes in rural communities in the Dominican Republic.

Diabetes Res Clin Pract 2019 Feb 11;148:212-221. Epub 2019 Jan 11.

Creighton University, School of Medicine, Education Building, Suite 105, 7710 Mercy Road, Omaha, NE 68124, USA.

Aims: To describe the implementation of a diabetes and hypertension program in the rural Dominican Republic (DR) and to report six years of quality improvement process and health outcomes.

Methods: Dominican teams at two clinics are supported by Chronic Care International with supervision and continuing education, an electronic database, diabetes and hypertension protocols, medications, self-management education materials, behavior change techniques, and equipment and testing supplies (e.g., for HbA1c, lipids, blood pressure, and BMI). A monthly dashboard of care processes and health outcomes guides problem solving and goal setting. Results were analyzed for quality improvement reports and by fitting the clinical data to random-effects linear models.
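
As an illustration of the modeling approach described above, the sketch below fits a random-effects (mixed) linear model for HbA1c over time by clinic using statsmodels. The column names and toy data are placeholders; a real analysis would use the full longitudinal dataset.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Placeholder longitudinal HbA1c data; column names are illustrative assumptions.
    df = pd.DataFrame({
        "patient_id":     [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "clinic":         ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
        "years_enrolled": [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
        "hba1c":          [9.1, 8.4, 7.9, 8.8, 8.2, 8.0, 9.5, 9.0, 8.7, 8.3, 8.1, 7.8],
    })

    # Random-intercept model: HbA1c trajectory over time with a clinic-by-time
    # interaction and patient-level random intercepts (toy data, illustrative fit).
    model = smf.mixedlm("hba1c ~ years_enrolled * clinic", data=df, groups=df["patient_id"])
    result = model.fit()
    print(result.summary())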

Results: 1191 adults were enrolled in the program at the two clinics (44% men; baseline means: age 56.4 years, BMI 27.4 kg/m², HbA1c 8.8% (73 mmol/mol), BP 133/81 mmHg). The data show steady growth in clinic populations, which are reaching capacity. Adherence to protocols for comprehensive foot examinations and for BP and HbA1c assessments, as well as the proportions reaching quality measures, improved over time, especially after clinic goal setting. Modeling of BP, BMI, and HbA1c values revealed important differences in outcomes between clinics over time.

Conclusions: Improvements in process and health outcomes are attainable in rural DR when medical teams have support and access to data. Scalability and sustainability are continuing goals.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.diabres.2019.01.010DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6394404PMC
February 2019

A Quantitative Clinical Decision-Support Strategy Identifying Which Patients With Oropharyngeal Head and Neck Cancer May Benefit the Most From Proton Radiation Therapy.

Int J Radiat Oncol Biol Phys 2019 07 26;104(3):540-552. Epub 2018 Nov 26.

Institute for Onco-Physics, Albert Einstein College of Medicine, Bronx, New York; Department of Radiation Oncology, Montefiore Medical Center, Bronx, New York; Department of Neurology, Albert Einstein College of Medicine, Bronx, New York.

Purpose: To develop a quantitative decision-support strategy estimating the impact of normal tissue complications from definitive radiation therapy (RT) for head and neck cancer (HNC), and to use this strategy to identify patients with oropharyngeal HNC who may benefit most from receiving proton RT.

Methods And Materials: Recent normal tissue complication probability (NTCP) models for dysphagia, esophagitis, hypothyroidism, xerostomia, and oral mucositis were used to estimate NTCP for 33 patients with oropharyngeal HNC previously treated with photon intensity modulated radiation therapy (IMRT). Comparative proton therapy plans were generated using clinical protocols for HNC RT at a collaborating proton center. Organ-at-risk (OAR) doses from the photon and proton RT plans were used to calculate NTCPs; Monte Carlo sampling (10,000 samples per patient) was used to account for model parameter uncertainty. The latency and duration of each complication were modeled from the calculated NTCP, accounting for age-, sex-, smoking-, and p16-specific conditional survival probability. Complications were then assigned quality-adjustment factors based on severity to calculate the quality-adjusted life years (QALYs) lost to each complication.
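
The Monte Carlo step described above can be sketched schematically as follows. The logistic NTCP form, the parameter distributions, the organ-at-risk doses, and the utility decrement are all placeholder assumptions for illustration and do not reproduce the published models.

    import numpy as np

    rng = np.random.default_rng(42)
    N_SAMPLES = 10_000

    def ntcp_logistic(mean_dose_gy, d50, gamma):
        """Logistic NTCP as a function of mean organ-at-risk dose (one common model form)."""
        return 1.0 / (1.0 + np.exp(4.0 * gamma * (1.0 - mean_dose_gy / d50)))

    # Placeholder parameter uncertainty for a dysphagia-like endpoint: D50 and slope
    # gamma are sampled to propagate model-parameter uncertainty.
    d50_samples = rng.normal(loc=47.0, scale=3.0, size=N_SAMPLES)
    gamma_samples = rng.normal(loc=1.5, scale=0.2, size=N_SAMPLES)

    mean_dose_photon, mean_dose_proton = 52.0, 40.0   # illustrative OAR mean doses (Gy)
    ntcp_photon = ntcp_logistic(mean_dose_photon, d50_samples, gamma_samples)
    ntcp_proton = ntcp_logistic(mean_dose_proton, d50_samples, gamma_samples)

    # Convert complication probability to expected QALYs lost, using an assumed
    # quality-adjustment factor and complication duration.
    utility_decrement, duration_years = 0.10, 10.0
    qaly_lost_photon = ntcp_photon * utility_decrement * duration_years
    qaly_lost_proton = ntcp_proton * utility_decrement * duration_years
    spared = qaly_lost_photon - qaly_lost_proton

    print(f"QALYs spared (mean, 95% interval): {spared.mean():.2f} "
          f"({np.percentile(spared, 2.5):.2f}-{np.percentile(spared, 97.5):.2f})")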

Results: Based on our institutionally delivered photon IMRT doses and the achievable proton therapy doses, the average QALY reduction from all HNC RT complications was 1.52 QALYs for photon therapy versus 1.15 QALYs for proton therapy, with proton therapy sparing 0.37 QALYs on average (composite 95% confidence interval, 0.27-2.53 QALYs). Long-term complications (dysphagia and xerostomia) contributed most to the QALY reduction. The QALYs spared with proton RT varied considerably among patients, ranging from 0.06 to 0.84 QALYs. Younger patients with p16-positive tumors who smoked ≤10 pack-years may benefit most from proton therapy, although this finding should be validated using larger patient series. A sensitivity analysis reducing photon IMRT doses to all OARs by 20% resulted in no overall estimated benefit from proton therapy (-0.02 QALYs spared), although some patients still had an estimated benefit in this scenario (range, -0.50 to 0.43 QALYs spared).

Conclusions: This quantitative decision-support strategy allowed us to identify patients with oropharyngeal cancer who might benefit the most from proton RT, although the estimated benefit of proton therapy ultimately depends on the OAR doses achievable with modern photon IMRT solutions. These results can help radiation oncologists and proton therapy centers optimize resource allocation and improve quality of life for patients with HNC.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1016/j.ijrobp.2018.11.039DOI Listing
July 2019

Foods and Drinks Available from Urban Food Pantries: Nutritional Quality by Item Type, Sourcing, and Distribution Method.

J Community Health 2019 04;44(2):339-364

Department of Family and Social Medicine, Albert Einstein College of Medicine, Montefiore Health System, 1300 Morris Park Ave, Block Building, Room 408, Bronx, NY, 10461-1900, USA.

The overall nutritional quality of the foods and drinks available at urban food pantries is not well established. In a study of 50 pantries listed as operating in the Bronx, NY, data on food/drink type (fresh, shelf-stable, refrigerated/frozen) came from direct observation. Data on food/drink sourcing (food bank or other) and distribution method (prefilled bag vs. client choice, for a given client's position in line) came from semi-structured interviews with pantry workers. Overall nutritional quality was determined using NuVal® scores (range 1-100; a higher score indicates higher nutritional quality). Twenty-nine pantries offered nothing at their listed times (they were actually closed or had no food/drinks in stock). Of the 21 pantries that were open as listed and had foods/drinks to offer, 12 distributed items in prefilled bags (traditional pantries) and 9 allowed client choice. Mean NuVal® scores were higher for foods/drinks available from client-choice pantries than from traditional pantries (69.3 vs. 57.4), driven mostly by the sourcing of fresh items (at 28.3% of client-choice pantries vs. 4.8% of traditional pantries). For a hypothetical 'balanced basket' of one fruit, vegetable, grain, dairy, and protein item each, the highest-NuVal® items had a mean score of 98.8 across client-choice pantries versus 96.6 across traditional pantries; the lowest-NuVal® items had mean scores of 16.4 and 35.4, respectively. Pantry workers reported that lower-scoring items (e.g., white rice) were more popular, appearing in early bags or being selected first and leaving higher-scoring items (e.g., brown rice) for clients later in line. Fewer than 50% of sampled pantries were open and had food/drink to offer at listed times. Nutritional quality varied by item type and sourcing, and could also vary by distribution method and client position in line. Findings suggest opportunities for pantry operation, client and staff education, and additional research.
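
The descriptive comparison of NuVal® scores by distribution method and the 'balanced basket' summary can be illustrated with a short sketch. The item list and scores below are placeholders, not the study inventory.

    import pandas as pd

    # Placeholder inventory data; items and scores are illustrative assumptions.
    items = pd.DataFrame({
        "pantry_type": ["client_choice"] * 4 + ["traditional"] * 4,
        "food_group": ["grain", "grain", "vegetable", "dairy",
                       "grain", "grain", "vegetable", "dairy"],
        "item": ["brown rice", "white rice", "fresh spinach", "low-fat milk",
                 "white rice", "instant noodles", "canned corn", "whole milk"],
        "nuval": [82, 57, 100, 81, 57, 23, 55, 52],
    })

    # Mean overall nutritional quality by distribution method.
    print(items.groupby("pantry_type")["nuval"].mean())

    # 'Balanced basket' style summary: best and worst available item per food group
    # and pantry type.
    basket = (items.groupby(["pantry_type", "food_group"])["nuval"]
                   .agg(best="max", worst="min"))
    print(basket)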
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1007/s10900-018-0592-zDOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6414256PMC
April 2019