Publications by authors named "David Moher"

636 Publications

Implementing the 27 PRISMA 2020 Statement items for systematic reviews in the sport and exercise medicine, musculoskeletal rehabilitation and sports science fields: the PERSiST (implementing Prisma in Exercise, Rehabilitation, Sport medicine and SporTs science) guidance.

Br J Sports Med 2021 Oct 8. Epub 2021 Oct 8.

Center for General Practice, Aalborg University, Aalborg, Denmark.

Poor reporting of medical and healthcare systematic reviews is a problem from which the sports and exercise medicine, musculoskeletal rehabilitation, and sports science fields are not immune. Transparent, accurate and comprehensive systematic review reporting helps researchers replicate methods, readers understand what was done and why, and clinicians and policy-makers implement results in practice. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement and its accompanying Explanation and Elaboration document provide general reporting examples for systematic reviews of healthcare interventions. However, implementation guidance for sport and exercise medicine, musculoskeletal rehabilitation, and sports science does not exist. The Prisma in Exercise, Rehabilitation, Sport medicine and SporTs science (PERSiST) guidance attempts to address this problem. Nineteen content experts collaborated with three methods experts to identify examples of exemplary reporting in systematic reviews in sport and exercise medicine (including physical activity), musculoskeletal rehabilitation (including physiotherapy), and sports science, for each of the PRISMA 2020 Statement items. PERSiST aims to help: (1) systematic reviewers improve the transparency and reporting of systematic reviews and (2) journal editors and peer reviewers make informed decisions about systematic review reporting quality.
http://dx.doi.org/10.1136/bjsports-2021-103987
October 2021

Reporting Transparency and Completeness in Trials: Paper 4 - Reporting of randomised controlled trials conducted using routinely collected electronic records - room for improvement.

J Clin Epidemiol 2021 Sep 12. Epub 2021 Sep 12.

National Perinatal Epidemiology Unit Clinical Trials Unit, Nuffield Department of Population Health, University of Oxford, Oxford, United Kingdom; Nottingham Clinical Trials Unit, University of Nottingham, Building 42, University Park, Nottingham, United Kingdom.

Objective: To describe characteristics of randomised controlled trials (RCTs) conducted using electronic health records (EHRs), including completeness and transparency of reporting assessed against the 2021 CONSORT Extension for RCTs Conducted Using Cohorts and Routinely Collected Data (CONSORT-ROUTINE) criteria.

Study Design: MEDLINE and Cochrane Methodology Register were searched for a sample of RCTs published from 2011-2018. Completeness of reporting was assessed in a random sample using a pre-defined coding form.

Results: 183 RCT publications were identified; 122 (67%) used EHRs to identify eligible participants, 139 (76%) used the EHR as part of the intervention, and 137 (75%) used EHRs to ascertain outcomes. When 60 publications were evaluated against the CONSORT 2010 items and the corresponding extension for the 8 modified items, four items were 'adequately reported' for the majority of trials. Five new reporting items were identified for the CONSORT-ROUTINE extension; when evaluated, one was 'adequately reported', three were reported 'inadequately or not at all', and the other was reported 'partially'. There were, however, some encouraging signs, with adequate or partial reporting of many important items, including descriptions of trial design, the consent process, outcome ascertainment, and interpretation.

Conclusion: Aspects of RCTs using EHRs are sub-optimally reported. Uptake of the CONSORT-ROUTINE Extension may improve reporting.
http://dx.doi.org/10.1016/j.jclinepi.2021.09.011
September 2021

Reporting Transparency and Completeness in Trials: Paper 2 - Reporting of randomised trials using registries was often inadequate and hindered the interpretation of results.

J Clin Epidemiol 2021 Sep 12. Epub 2021 Sep 12.

Basel Institute for Clinical Epidemiology and Biostatistics, Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland; Meta-Research Innovation Center Berlin (METRIC-B), Berlin Institute of Health, Berlin, Germany; Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, USA.

Objective: Registries are important data sources for randomised controlled trials (RCTs), but reporting of how they are used may be inadequate. The objective was to describe the current adequacy of reporting of RCTs using registries.

Study Design And Setting: We used a database of trials using registries from a scoping review supporting the development of the 2021 CONSORT extension for Trials Conducted Using Cohorts and Routinely Collected Data (CONSORT-ROUTINE). Reporting completeness of 13 CONSORT-ROUTINE items was assessed.

Results: We assessed reports of 47 RCTs that used a registry, published between 2011 and 2018. Of the 13 CONSORT-ROUTINE items, 6 were adequately reported in at least half of reports (2 in at least 80%). The 7 other items were related to routinely collected data source eligibility (32% adequate), data linkage (8% adequate), validation and completeness of data used for outcome assessment (8% adequate), validation and completeness of data used for participant recruitment (0% adequate), participant flow (9% adequate), registry funding (6% adequate) and interpretation of results in consideration of registry use (25% adequate).

Conclusion: Reporting of trials using registries was often poor, particularly details on data linkage and quality. Better reporting is needed for appropriate interpretation of the results of these trials.
http://dx.doi.org/10.1016/j.jclinepi.2021.09.012
September 2021

Reporting transparency and completeness in trials: Paper 3 - trials conducted using administrative databases do not adequately report elements related to use of databases.

J Clin Epidemiol 2021 Sep 11. Epub 2021 Sep 11.

Biomedical Ethics Unit, McGill University, Montreal, Quebec, Canada.

Objective: We evaluated reporting completeness and transparency in randomized controlled trials (RCTs) conducted using administrative data based on 2021 CONSORT Extension for Trials Conducted Using Cohorts and Routinely Collected Data (CONSORT-ROUTINE) criteria.

Study Design And Setting: MEDLINE and the Cochrane Methodology Register were searched (2011 and 2018). Eligible RCTs used administrative databases for identifying eligible participants or collecting outcomes. We evaluated reporting based on CONSORT-ROUTINE, which modified eight items from CONSORT 2010 and added five new items.

Results: Of 33 included trials (76% used administrative databases for outcomes, 3% for identifying participants, 21% for both), most were conducted in the United States (55%), Canada (18%), or the United Kingdom (12%). Of the eight items modified in the extension, six were adequately reported in a majority (>50%) of trials. For the CONSORT-ROUTINE modification portion of those items, three were reported adequately in >50% of trials, two in <50%, two applied only to some trials, and one had only wording modifications and was not evaluated. Of the five new items, the four that address the use of routine data in trials were inadequately reported in most trials.

Conclusion: How administrative data are used in trials is often sub-optimally reported. CONSORT-ROUTINE uptake may improve reporting.
http://dx.doi.org/10.1016/j.jclinepi.2021.09.010
September 2021

Tranexamic acid for prevention of bleeding in cesarean delivery: An overview of systematic reviews.

Acta Anaesthesiol Scand 2021 Sep 12. Epub 2021 Sep 12.

Department of Anesthesiology and Pain Medicine, The Ottawa Hospital Research Institute, The Ottawa Hospital, General Campus, Ottawa, Ontario, Canada.

Background: Bleeding is the leading cause of maternal mortality in the world. Tranexamic acid reduces bleeding in trauma and surgery. Several systematic reviews of randomized trials have investigated tranexamic acid in the prevention of bleeding in cesarean delivery. However, the conclusions from systematic reviews are conflicting. This overview aims to summarize the evidence and explore the reasons for conflicting conclusions across the systematic reviews.

Methods: A comprehensive literature search of Medline, Embase, and Cochrane Database of Systematic Reviews was conducted from inception to April 2021. Screening, data extraction, and quality assessments were performed by two independent reviewers. A Measurement Tool to Assess Reviews 2 and the Risk of Bias Assessment Tool for Systematic Reviews were used for study appraisal. A qualitative synthesis of evidence is presented.

Results: In all, 14 systematic reviews were included in our analysis. Across these reviews, there were 32 relevant randomized trials. A modest reduction in blood transfusions and bleeding outcomes was found by most systematic reviews. Overall confidence in results varied from low to critically low. All of the included systematic reviews were at high risk of bias. Quality of evidence from randomized trials was uncertain.

Conclusions: Systematic reviews investigating prophylactic tranexamic acid in cesarean delivery are heterogeneous in terms of methodological and reporting quality. Tranexamic acid may reduce blood transfusion and bleeding outcomes, but rigorous well-designed research is needed due to the limitations of the included studies. Data on safety and adverse effects are insufficient to draw conclusions.
http://dx.doi.org/10.1111/aas.13981
September 2021

Commentary: Reporting Guidelines for Studies on Artificial Intelligence: What Neurosurgeons Should Know.

Neurosurgery 2021 Aug 25. Epub 2021 Aug 25.

Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada.

http://dx.doi.org/10.1093/neuros/nyab331
August 2021

Status, use and impact of sharing individual participant data from clinical trials: a scoping review.

BMJ Open 2021 08 18;11(8):e049228. Epub 2021 Aug 18.

CHU Rennes, INSERM CIC 1414 (Centre d'Investigation Clinique de Rennes), University Rennes, Rennes, Bretagne, France.

Objectives: To explore the impact of data-sharing initiatives on the intent to share data, on actual data sharing, on the use of shared data and on research output and impact of shared data.

Eligibility Criteria: All studies investigating data-sharing practices for individual participant data (IPD) from clinical trials.

Sources Of Evidence: We searched the Medline database, the Cochrane Library, the Science Citation Index Expanded and the Social Sciences Citation Index via Web of Science, and preprints and proceedings of the International Congress on Peer Review and Scientific Publication. In addition, we inspected major clinical trial data-sharing platforms, contacted major journals/publishers, editorial groups and some funders.

Charting Methods: Two reviewers independently extracted information on methods and results from resources identified using a standardised questionnaire. A map of the extracted data was constructed and accompanied by a narrative summary for each outcome domain.

Results: 93 studies identified in the literature search (published between 2001 and 2020, median: 2018) and 5 from additional information sources were included in the scoping review. Most studies were descriptive and focused on early phases of the data-sharing process. While the willingness to share IPD from clinical trials is extremely high, actual data-sharing rates are suboptimal. A survey of journal data suggests poor to moderate enforcement of data-sharing policies by publishers. Metrics provided by platforms suggest that a large majority of data remains unrequested. When data are requested, the purpose of reuse is more often secondary analysis or meta-analysis, and rarely re-analysis. Finally, studies on the real impact of data sharing were rare and relied on surrogates such as citation metrics.

Conclusions: There is currently a gap in the evidence base for the impact of IPD sharing, which entails uncertainties in the implementation of current data-sharing policies. High level evidence is needed to assess whether the value of medical research increases with data-sharing practices.
http://dx.doi.org/10.1136/bmjopen-2021-049228
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8375721
August 2021

Do overly complex reporting guidelines remove the focus from good clinical trials?

BMJ 2021 08 16;374:n1793. Epub 2021 Aug 16.

Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, University of Ottawa, Canada.

http://dx.doi.org/10.1136/bmj.n1793
August 2021

Commentary: Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 Statement: What Neurosurgeons Should Know.

Neurosurgery 2021 Oct;89(5):E267-E268

Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada.

http://dx.doi.org/10.1093/neuros/nyab289
October 2021

PRISMA-S: an extension to the PRISMA statement for reporting literature searches in systematic reviews.

J Med Libr Assoc 2021 Apr;109(2):174-200

Emerging Technology and Innovation Strategist, University of Minnesota, Minneapolis, MN.

Background: Literature searches underlie the foundations of systematic reviews and related review types. Yet, the literature searching component of systematic reviews and related review types is often poorly reported. Guidance for literature search reporting has been diverse and, in many cases, does not offer enough detail to authors who need more specific information about reporting search methods and information sources in a clear, reproducible way. This document presents the PRISMA-S (Preferred Reporting Items for Systematic reviews and Meta-Analyses literature search extension) checklist, and explanation and elaboration.

Methods: The checklist was developed using a three-stage Delphi survey process, followed by a consensus conference and public review process.

Results: The final checklist includes sixteen reporting items, each of which is detailed with exemplar reporting and rationale.

Conclusions: The intent of PRISMA-S is to complement the PRISMA Statement and its extensions by providing a checklist that could be used by interdisciplinary authors, editors, and peer reviewers to verify that each component of a search is completely reported and, therefore, reproducible.
http://dx.doi.org/10.5195/jmla.2021.962
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8270366
April 2021

What difference might retractions make? An estimate of the potential epistemic cost of retractions on meta-analyses.

Account Res 2021 Jul 14:1-18. Epub 2021 Jul 14.

Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada.

The extent to which a retraction might require revising previous scientific estimates and beliefs - which we define as the epistemic cost - is unknown. We collected a sample of 229 meta-analyses published between 2013 and 2016 that had cited a retracted study, assessed whether this study was included in the meta-analytic estimate and, if so, re-calculated the summary effect size without it. The majority (68% of N = 229) of retractions had occurred at least one year prior to the publication of the citing meta-analysis. In 53% of these avoidable citations, the retracted study was cited as a candidate for inclusion, and in only 34% of those meta-analyses (13% of the total) was the study explicitly excluded because it had been retracted. Meta-analyses that included retracted studies were published in journals with significantly lower impact factors. Summary estimates without the retracted study were lower than the original if the retraction was due to issues with data or results, and higher otherwise, but the effect was small. We conclude that meta-analyses have a problematically high probability of citing retracted articles and of including them in their pooled summaries, but that the overall epistemic cost is contained.
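The re-calculation step described above can be illustrated with a minimal inverse-variance (fixed-effect) pooling sketch. The study effects, standard errors, and function name below are hypothetical illustrations, not data or code from the paper:

```python
import math

def pooled_estimate(effects, ses):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in ses]
    estimate = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return estimate, pooled_se

# Hypothetical meta-analysis of three studies; the third is later retracted.
effects = [0.30, 0.25, 0.80]  # study effect sizes (e.g., log odds ratios)
ses = [0.10, 0.12, 0.15]      # their standard errors

est_all, _ = pooled_estimate(effects, ses)           # summary including the retracted study
est_kept, _ = pooled_estimate(effects[:2], ses[:2])  # summary after excluding it
```

Comparing `est_all` with `est_kept` gives the shift in the summary estimate attributable to the retracted study, which is the kind of quantity the authors use to gauge epistemic cost.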
http://dx.doi.org/10.1080/08989621.2021.1947810
July 2021

Characteristics of Randomized Clinical Trials in Surgery From 2008 to 2020: A Systematic Review.

JAMA Netw Open 2021 Jun 1;4(6):e2114494. Epub 2021 Jun 1.

Department of Cardiothoracic Surgery, Weill Cornell Medicine, New York, New York.

Importance: Randomized clinical trials (RCTs) provide the highest level of evidence to evaluate 2 or more surgical interventions. Surgical RCTs, however, face unique challenges in design and implementation.

Objective: To evaluate the design, conduct, and reporting of contemporary surgical RCTs.

Evidence Review: A literature search was performed in the 2 highest-impact-factor journals in general medicine and in each of 6 key surgical specialties to identify RCTs published between 2008 and 2020. All RCTs describing a surgical intervention in both the experimental and control arms were included. Data quality was ensured by establishing an a priori protocol specifying all details to be extracted. Trial characteristics, fragility index, risk of bias (Cochrane Risk of Bias 2 tool), pragmatism (Pragmatic Explanatory Continuum Indicator Summary 2 [PRECIS-2]), and reporting bias were assessed.

Findings: A total of 388 trials were identified. Of these, 242 (62.4%) were registered; discrepancies with the published protocol were identified in 81 (33.5%). Most trials used a superiority design (329 [84.8%]) and intention-to-treat as the primary analysis (221 [56.9%]), and were designed to detect a large treatment effect (median, 50.0%; interquartile range [IQR], 24.7%-63.3%). Only 123 trials (31.7%) used major clinical events as the primary outcome. Most trials (303 [78.1%]) did not control for surgeon experience; only 17 trials (4.4%) assessed the quality of the intervention. The median sample size was 122 patients (IQR, 70-245 patients) and the median follow-up was 24 months (IQR, 12.0-32.0 months). Most trials (211 [54.4%]) raised some concerns of bias, and 91 (23.5%) were at high risk of bias. The mean (SD) PRECIS-2 score was 3.52 (0.65) and increased significantly over the study period. Most trials (212 [54.6%]) reported a neutral result; reporting bias was identified in 109 of 211 (51.7%). The median fragility index was 3.0 (IQR, 1.0-6.0). Multiplicity was detected in 175 trials (45.1%), and only 35 of these (20.0%) adjusted for multiple comparisons.

Conclusions And Relevance: In this systematic review, the size of contemporary surgical trials was small and the focus was on minor clinical events. Trial registration remained suboptimal and discrepancies with the published protocol and reporting bias were frequent. Few trials controlled for surgeon experience or assessed the quality of the intervention.
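The fragility index reported in the findings above can be sketched as follows: flip non-events to events in the arm with fewer events until a two-sided Fisher exact test loses significance. The function names and example counts are hypothetical, and this is a minimal illustration rather than the review's actual procedure:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    def prob(x):
        # hypergeometric probability of x events in the first arm
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # sum probabilities of all tables as extreme as or less likely than the observed one
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-7))

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Event flips in the smaller-event arm needed for the result to lose significance."""
    a, b, flips = events_a, events_b, 0
    while fisher_two_sided(a, n_a - a, b, n_b - b) < alpha:
        if a <= b:
            a += 1  # flip one non-event to an event in the arm with fewer events
        else:
            b += 1
        flips += 1
    return flips
```

A median fragility index of 3.0, as reported above, means that for half the trials, reclassifying three or fewer outcomes would have been enough to overturn statistical significance.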
http://dx.doi.org/10.1001/jamanetworkopen.2021.14494
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8246313
June 2021

Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol.

BMJ Open 2021 06 28;11(6):e047709. Epub 2021 Jun 28.

Division of Health Policy and Management, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA.

Introduction: The Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed to improve the completeness and transparency of reporting in studies investigating diagnostic test accuracy. However, its current form, STARD 2015, does not address the issues and challenges raised by artificial intelligence (AI)-centred interventions. We therefore propose an AI-specific version of the STARD checklist (STARD-AI), which focuses on the reporting of AI diagnostic test accuracy studies. This paper describes the methods that will be used to develop STARD-AI.

Methods And Analysis: The development of the STARD-AI checklist can be distilled into six stages. (1) A project organisation phase has been undertaken, during which a Project Team and a Steering Committee were established; (2) an item generation process has been completed following a literature review, a patient and public involvement and engagement exercise, and an online scoping survey of international experts; (3) a three-round modified Delphi consensus process is underway, which will culminate in a teleconference consensus meeting of experts; (4) thereafter, the Project Team will draft the initial STARD-AI checklist and accompanying documents; (5) a piloting phase among expert users will be undertaken to identify items that are unclear or missing, and this process, consisting of surveys and semi-structured interviews, will contribute to the explanation and elaboration document; and (6) on finalisation of the manuscripts, the group's efforts will turn to an organised dissemination and implementation strategy to maximise end-user adoption.

Ethics And Dissemination: Ethical approval has been granted by the Joint Research Compliance Office at Imperial College London (reference number: 19IC5679). A dissemination strategy will be aimed towards five groups of stakeholders: (1) academia, (2) policy, (3) guidelines and regulation, (4) industry and (5) public and non-specific stakeholders. We anticipate that dissemination will take place in Q3 of 2021.
http://dx.doi.org/10.1136/bmjopen-2020-047709
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8240576
June 2021

Imputing intracluster correlation coefficients from a posterior predictive distribution is a feasible method of dealing with unit of analysis errors in a meta-analysis of cluster RCTs.

J Clin Epidemiol 2021 Jun 22. Epub 2021 Jun 22.

Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada; Department of Medicine, University of Ottawa, Ottawa, Canada.

Background: Incorporating cluster randomized trials (CRTs) into meta-analyses is challenging because appropriate standard errors of study estimates accounting for clustering are not always reported. Systematic reviews of CRTs often use a single constant external estimate of the intraclass correlation coefficient (ICC) to adjust study estimate standard errors and facilitate meta-analyses; an approach that fails to account for possible variation of ICCs among studies and the imprecision with which they are estimated. Using a large systematic review of the effects of diabetes quality improvement interventions, we investigated whether we could better account for ICC variation and uncertainty in meta-analyzed effect estimates by imputing missing ICCs from a posterior predictive distribution constructed from a database of relevant ICCs.

Methods: We constructed a dataset of ICC estimates from applicable studies. For outcomes with two or more available ICC estimates, we constructed posterior predictive ICC distributions in a Bayesian framework. For a selected continuous outcome, glycosylated hemoglobin (HbA1c), we compared the impact of incorporating a single constant ICC versus imputing ICCs drawn from the posterior predictive distribution when estimating the effect of intervention components on post treatment mean in a case study of diabetes quality improvement trials.

Results: Using internal and external ICC estimates, we were able to construct a database of 59 ICCs for 12 of the 13 review outcomes (range 1-10 per outcome) and estimate the posterior predictive ICC distribution for 11 review outcomes. Synthesized results were not markedly changed by our approach for HbA1c.

Conclusion: Building posterior predictive distributions to impute missing ICCs is a feasible approach to facilitate principled meta-analyses of cluster randomized trials using prior data. Further work is needed to establish whether the application of these methods leads to improved review inferences for different reviews based on different factors (e.g., proportion of CRTs and CRTs with missing ICCs, different outcomes, variation and precision of ICCs).
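The standard-error adjustment underlying both approaches in this abstract can be sketched with the Kish design effect. The numbers, the normal distribution standing in for the posterior predictive distribution, and the function names below are illustrative assumptions, not the review's actual model:

```python
import math
import random

def design_effect(mean_cluster_size, icc):
    # Kish design effect: variance inflation due to clustering
    return 1.0 + (mean_cluster_size - 1.0) * icc

def adjusted_se(naive_se, mean_cluster_size, icc):
    # Inflate a standard error computed as if participants were independent
    return naive_se * math.sqrt(design_effect(mean_cluster_size, icc))

# Conventional approach: a single constant external ICC.
se_constant = adjusted_se(0.10, 30, 0.05)

# Imputation approach (toy): draw ICCs from a distribution standing in for
# the posterior predictive distribution, propagating ICC uncertainty.
random.seed(1)
se_draws = [
    adjusted_se(0.10, 30, max(0.0, random.gauss(0.05, 0.02)))
    for _ in range(1000)
]
mean_adjusted_se = sum(se_draws) / len(se_draws)
```

The spread of `se_draws`, unlike the single value `se_constant`, carries the between-study variation and estimation imprecision of the ICC into the meta-analysis.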
http://dx.doi.org/10.1016/j.jclinepi.2021.06.011
June 2021

Guidelines for Reporting Trial Protocols and Completed Trials Modified Due to the COVID-19 Pandemic and Other Extenuating Circumstances: The CONSERVE 2021 Statement.

JAMA 2021 07;326(3):257-265

The Lancet , London, England.

Importance: Extenuating circumstances can trigger unplanned changes to randomized trials and introduce methodological, ethical, feasibility, and analytical challenges that can potentially compromise the validity of findings. Numerous randomized trials have required changes in response to the COVID-19 pandemic, but guidance for reporting such modifications is incomplete.

Objective: As a joint extension for the CONSORT and SPIRIT reporting guidelines, CONSERVE (CONSORT and SPIRIT Extension for RCTs Revised in Extenuating Circumstances) aims to improve reporting of trial protocols and completed trials that undergo important modifications in response to extenuating circumstances.

Evidence: A panel of 37 international trial investigators, patient representatives, methodologists and statisticians, ethicists, funders, regulators, and journal editors convened to develop the guideline. The panel developed CONSERVE following an accelerated, iterative process between June 2020 and February 2021 involving (1) a rapid literature review of multiple databases (OVID Medline, OVID EMBASE, and EBSCO CINAHL) and gray literature sources from 2003 to March 2021; (2) consensus-based panelist meetings using a modified Delphi process and surveys; and (3) a global survey of trial stakeholders.

Findings: The rapid review yielded 41 673 citations, of which 38 titles were relevant, including emerging guidance from regulatory and funding agencies for managing the effects of the COVID-19 pandemic on trials. However, no generalizable guidance for all circumstances in which trials and trial protocols might face unanticipated modifications was identified. The CONSERVE panel used these findings to develop a consensus reporting guideline over 4 rounds of meetings and surveys. Responses were received from 198 professionals from 34 countries, of whom 90% (n = 178) indicated that they understood the concept definitions and 85.4% (n = 169) indicated that they understood and could use the implementation tool. Feedback from survey respondents was used to finalize the guideline and confirm that its core concepts were applicable and useful for the trial community. CONSERVE incorporates an implementation tool and checklists tailored to trial reports and trial protocols for which extenuating circumstances have resulted in important modifications to the intended study procedures. The checklists include 4 sections capturing extenuating circumstances, important modifications, responsible parties, and interim data analyses.

Conclusions And Relevance: CONSERVE offers an extension to CONSORT and SPIRIT that could improve the transparency, quality, and completeness of reporting important modifications to trials in extenuating circumstances such as COVID-19.
http://dx.doi.org/10.1001/jama.2021.9941
July 2021

Editors-in-chief perceptions of patients as (co) authors on publications and the acceptability of ICMJE authorship criteria: a cross-sectional survey.

Res Involv Engagem 2021 Jun 14;7(1):39. Epub 2021 Jun 14.

Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, K1H 8L6, Canada.

Background: Access to, and awareness of, appropriate authorship criteria is an important right for patient partners. Our objective was to measure medical journal Editors-in-Chief's perceptions of including patients as (co-)authors on research publications, and their views on the application of the ICMJE (International Committee of Medical Journal Editors) authorship criteria to patient partners.

Methods: We conducted a cross-sectional survey co-developed with a patient partner. Editors-in-Chief of English-language medical journals were identified via a random sample of journals obtained from the Scopus source list. The key outcome measures were whether Editors-in-Chief believed: 1) patient partners should be (co-)authors and; 2) whether they felt the ICMJE criteria for authorship required modification for use with patient partners. We also measured Editors-in-Chief description of how their journal's operations incorporate patient partner perspectives.

Results: One hundred twelve Editors-in-Chief responded to our survey (18.7% response rate; 66.69% male). Participants were able to skip any questions they did not want to answer, so some items have missing data. 69.2% (N = 74) of Editors-in-Chief indicated it was acceptable for patient partners to be authors or co-authors on published biomedical research articles, while the remaining 30.8% (N = 33) indicated this would not be appropriate. When asked whether the ICMJE authorship criteria should be revised to be more inclusive of patient partners, 35.8% (N = 39) indicated they should be revised, 35.8% (N = 39) indicated they should not, and 28.4% (N = 31) were unsure. 74.1% (N = 80) of Editors-in-Chief did not think patients should be required to have an academic affiliation to publish, while 16.7% (N = 18) thought they should and 9.3% (N = 10) were unsure. Only 3.6% (N = 4) of Editors-in-Chief indicated their journal had a policy specifying how patients or patient partners should be considered as authors.

Conclusions: Our findings highlight gaps that may act as barriers to patient partner participation in research. A key implication is the need for education and consensus building within the biomedical community to establish processes that facilitate the equitable inclusion of patient partners.
http://dx.doi.org/10.1186/s40900-021-00290-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8201727
June 2021

The impact of gender on researchers' assessment: A randomized controlled trial.

J Clin Epidemiol 2021 Jun 9;138:95-101. Epub 2021 Jun 9.

Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, The Ottawa Hospital - General Campus, 501 Smyth Rd, Ottawa, Canada.

Objectives: This randomized controlled trial aimed to test whether women or men would be preferred when presenting identical curricula vitae (CVs), and the impact of career stage on evaluators' choices.

Study Design And Setting: A simulated post-doctoral selection process was conducted to assess evaluator judgment. Level 1 and 2 Brazilian fellow researchers in the field of Dentistry were invited to act as external reviewers in a post-doctoral selection process and were randomly assigned to receive a CV from either a woman or a man. They were asked to rate the CV from 0 to 10 on scientific contribution, leadership potential, ability to work in groups, and international experience.

Results: Across all categories evaluated, CVs from men received higher scores than CVs from women. Robust variance Poisson regressions demonstrated that men were more likely to receive higher scores in all categories, regardless of the applicant's career stage. For example, CVs from men were nearly three quarters more likely to be seen as having leadership potential than equivalent CVs from women.
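Purely as an illustration of what "nearly three quarters more likely" means for a robust Poisson model (the counts below are invented, not from the trial), this corresponds to a prevalence ratio of about 1.75:

```python
import math

# Hypothetical counts only: 70 of 100 men's CVs vs 40 of 100 women's CVs
# rated as showing leadership potential gives a prevalence ratio of 1.75,
# i.e. men's CVs are 75% ("nearly three quarters") more likely to be rated so.
men_high, men_total = 70, 100
women_high, women_total = 40, 100
pr = (men_high / men_total) / (women_high / women_total)
assert math.isclose(pr, 1.75)
```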

Conclusion: Gender bias is powerfully prevalent in academia in the dentistry field, regardless of researchers' career stage. Actions such as implicit bias training must be implemented urgently to prevent, or at least reduce, further harm to women.
http://dx.doi.org/10.1016/j.jclinepi.2021.05.026
June 2021

A cross-sectional literature survey showed the reporting quality of multicenter randomized controlled trials should be improved.

J Clin Epidemiol 2021 Sep 21;137:250-261. Epub 2021 May 21.

Chinese EQUATOR Centre, Hong Kong Chinese Medicine Clinical Study Centre, Chinese Clinical Trial Registry (Hong Kong), School of Chinese Medicine, Hong Kong Baptist University, HKSAR, China. Electronic address:

Objective: To assess the reporting quality of randomized controlled trials (RCTs) with multicenter design, particularly whether necessary information related to multicenter characteristics was adequately reported.

Study Design And Setting: Through a search of 4 international electronic databases, we identified multicenter RCTs published in English from 1975 to 2019. Reporting quality was assessed using the CONSORT (Consolidated Standards of Reporting Trials) checklist (37 items) and a self-designed multicenter-specific checklist (27 items covering multicenter design, implementation, and analysis). The scores of trials published in three time periods (1975-1995, 1996-2009, and 2010-2019) were also compared.

Results: A total of 2,844 multicenter RCTs were included. For the CONSORT checklist, the mean (standard deviation) reporting score was 24.1 (5.5); 12 items were assessed as excellent (>90%), 12 items as good (50%-90%), and 13 items as poor (<50%). For the multicenter checklist, the reporting score was 3.9 (2.2); only 3 items were excellent or good, and the remaining 24 items were poor. Time period comparison showed that reporting quality improved over time, especially after CONSORT 2010 was issued.
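The item-level grading thresholds stated in the abstract can be sketched directly (a minimal illustration; the function name and example percentages are ours, not the authors'):

```python
# Grading thresholds as described in the abstract: an item reported by >90%
# of trials is "excellent", by 50-90% "good", and by <50% "poor".
def grade(reporting_pct):
    if reporting_pct > 90:
        return "excellent"
    if reporting_pct >= 50:
        return "good"
    return "poor"

assert [grade(p) for p in (95, 70, 30)] == ["excellent", "good", "poor"]
```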

Conclusion: Although CONSORT appears to have enhanced the reporting quality of multicenter RCTs, further improvement is needed. A "CONSORT extension for multicenter trials" should be developed.
http://dx.doi.org/10.1016/j.jclinepi.2021.05.008
September 2021

Time to improve the reporting of harms in randomized controlled trials.

J Clin Epidemiol 2021 08 10;136:216-220. Epub 2021 May 10.

Faculty of Medicine & Dentistry, University of Alberta, Edmonton, Alberta, Canada. Electronic address:

http://dx.doi.org/10.1016/j.jclinepi.2021.04.020
August 2021

Top health research funders' guidance on selecting journals for funded research.

F1000Res 2021 11;10:100. Epub 2021 Feb 11.

School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, K1G 5Z3, Canada.

Funded health research is being published in journals that many regard as "predatory", deceptive, and non-credible. We do not currently know whether funders provide guidance on how to select a journal in which to publish funded health research. We identified the 46 largest philanthropic, public, development assistance, public-private partnership, and multilateral funders of health research by expenditure globally, as well as four public funders from lower-middle income countries, from the list at https://healthresearchfunders.org. One of us identified guidance on disseminating funded research from each funder's website (August/September 2017), then extracted information about selecting journals, which was verified by another assessor. Discrepancies were resolved by discussion. Results were summarized descriptively. This research used publicly available information; we did not seek verification with funding bodies. The majority (44/50) of sampled funders indicated funding health research. Of these 44, 38 (86%) had publicly available information about disseminating funded research, typically called "policies" (29, 76%). Of these 38, 36 (95%) mentioned journal publication for dissemination, of which 13 (36%) offered variable guidance on selecting a journal, all of it relating to the funder's open access mandate. Six funders (17%) outlined publisher requirements or features by which to select a journal. One funder linked to a document listing features of journals to look for (e.g., listed in the Directory of Open Access Journals) and to be wary of (e.g., no journal scope statement, use of direct and unsolicited marketing). Few funders provided guidance on how to select a journal in which to publish funded research. Funders have a duty to ensure that the research they fund is discoverable by others.
This research provides a benchmark for funder guidance on journal selection prior to the January 2021 implementation of Plan S (a global, funder-led initiative to ensure immediate, open access to funded, published research).
http://dx.doi.org/10.12688/f1000research.27745.2
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8063518
June 2021

Preferred reporting items for systematic reviews and meta-analyses in ecology and evolutionary biology: a PRISMA extension.

Biol Rev Camb Philos Soc 2021 10 7;96(5):1695-1722. Epub 2021 May 7.

Evolution & Ecology Research Centre and School of Biological and Environmental Sciences, University of New South Wales, Sydney, NSW, 2052, Australia.

Since the early 1990s, ecologists and evolutionary biologists have aggregated primary research using meta-analytic methods to understand ecological and evolutionary phenomena. Meta-analyses can resolve long-standing disputes, dispel spurious claims, and generate new research questions. At their worst, however, meta-analysis publications are wolves in sheep's clothing: subjective with biased conclusions, hidden under coats of objective authority. Conclusions can be rendered unreliable by inappropriate statistical methods, problems with the methods used to select primary research, or problems within the primary research itself. Because of these risks, meta-analyses are increasingly conducted as part of systematic reviews, which use structured, transparent, and reproducible methods to collate and summarise evidence. For readers to determine whether the conclusions from a systematic review or meta-analysis should be trusted - and to be able to build upon the review - authors need to report what they did, why they did it, and what they found. Complete, transparent, and reproducible reporting is measured by 'reporting quality'. To assess perceptions and standards of reporting quality of systematic reviews and meta-analyses published in ecology and evolutionary biology, we surveyed 208 researchers with relevant experience (as authors, reviewers, or editors), and conducted detailed evaluations of 102 systematic review and meta-analysis papers published between 2010 and 2019. Reporting quality was far below optimal and approximately normally distributed. Measured reporting quality was lower than what the community perceived, particularly for the systematic review methods required to measure trustworthiness. The minority of assessed papers that referenced a guideline (~16%) showed substantially higher reporting quality than average, and surveyed researchers showed interest in using a reporting guideline to improve reporting quality. 
The leading guideline for improving reporting quality of systematic reviews is the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement. Here we unveil an extension of PRISMA to serve the meta-analysis community in ecology and evolutionary biology: PRISMA-EcoEvo (version 1.0). PRISMA-EcoEvo is a checklist of 27 main items that, when applicable, should be reported in systematic review and meta-analysis publications summarising primary research in ecology and evolutionary biology. In this explanation and elaboration document, we provide guidance for authors, reviewers, and editors, with explanations for each item on the checklist, including supplementary examples from published papers. Authors can consult this PRISMA-EcoEvo guideline both in the planning and writing stages of a systematic review and meta-analysis, to increase reporting quality of submitted manuscripts. Reviewers and editors can use the checklist to assess reporting quality in the manuscripts they review. Overall, PRISMA-EcoEvo is a resource for the ecology and evolutionary biology community to facilitate transparent and comprehensively reported systematic reviews and meta-analyses.
http://dx.doi.org/10.1111/brv.12721
October 2021

Epidemiology and reporting characteristics of preclinical systematic reviews.

PLoS Biol 2021 05 5;19(5):e3001177. Epub 2021 May 5.

Clinical Epidemiology Program, Blueprint Translational Research Group, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada.

In an effort to better utilize published evidence obtained from animal experiments, systematic reviews of preclinical studies are increasingly common, along with the methods and tools to appraise them (e.g., SYstematic Review Center for Laboratory animal Experimentation [SYRCLE's] risk of bias tool). We performed a cross-sectional study of a sample of recent preclinical systematic reviews (2015-2018), examined a range of epidemiological characteristics, and used a 46-item checklist to assess reporting details. We identified 442 reviews published across 43 countries in 23 different disease domains that used 26 animal species. Reporting of key details to ensure transparency and reproducibility was inconsistent across reviews and within article sections. Items were most completely reported in the title, introduction, and results sections of the reviews, and least reported in the methods and discussion sections. Less than half of reviews reported that a risk of bias assessment for internal and external validity was undertaken, and none reported methods for evaluating construct validity. Our results demonstrate that a considerable number of preclinical systematic reviews investigating diverse topics have been conducted; however, their quality of reporting is inconsistent. Our study provides the justification and evidence to inform the development of guidelines for conducting and reporting preclinical systematic reviews.
http://dx.doi.org/10.1371/journal.pbio.3001177
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8128274
May 2021

Methods and results used in the development of a consensus-driven extension to the Consolidated Standards of Reporting Trials (CONSORT) statement for trials conducted using cohorts and routinely collected data (CONSORT-ROUTINE).

BMJ Open 2021 04 29;11(4):e049093. Epub 2021 Apr 29.

Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Québec, Canada

Objectives: Randomised controlled trials conducted using cohorts and routinely collected data, including registries, electronic health records and administrative databases, are increasingly used in healthcare intervention research. A Consolidated Standards of Reporting Trials (CONSORT) statement extension for trials conducted using cohorts and routinely collected data (CONSORT-ROUTINE) has been developed with the goal of improving reporting quality. This article describes the processes and methods used to develop the extension and decisions made to arrive at the final checklist.

Methods: The development process involved five stages: (1) identification of the need for a reporting guideline and project launch; (2) conduct of a scoping review to identify possible modifications to CONSORT 2010 checklist items and possible new extension items; (3) a three-round modified Delphi study involving key stakeholders to gather feedback on the checklist; (4) a consensus meeting to finalise items to be included in the extension, followed by stakeholder piloting of the checklist; and (5) publication, dissemination and implementation of the final checklist.

Results: 27 items were initially developed and rated in Delphi round 1, 13 items were rated in round 2 and 11 items were rated in round 3. Response rates for the Delphi study were 92 of 125 (74%) invited participants in round 1, 77 of 92 (84%) round 1 completers in round 2 and 62 of 77 (81%) round 2 completers in round 3. Twenty-seven members of the project team representing a variety of stakeholder groups attended the in-person consensus meeting. The final checklist includes five new items and eight modified items. The extension Explanation & Elaboration document further clarifies aspects that are important to report.
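The Delphi response rates quoted above can be recomputed from the counts in the abstract (each round's denominator is the set of completers of the previous round):

```python
# Response rates per Delphi round, recomputed from the abstract's counts.
rounds = [("round 1", 92, 125), ("round 2", 77, 92), ("round 3", 62, 77)]
rates = {name: round(100 * n / d) for name, n, d in rounds}
assert rates == {"round 1": 74, "round 2": 84, "round 3": 81}
```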

Conclusion: Uptake of CONSORT-ROUTINE and accompanying Explanation & Elaboration document will improve conduct of trials, as well as the transparency and completeness of reporting of trials conducted using cohorts and routinely collected data.
http://dx.doi.org/10.1136/bmjopen-2021-049093
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8094349
April 2021

Intent to share Annals of Internal Medicine's trial data was not associated with data re-use.

J Clin Epidemiol 2021 Sep 26;137:241-249. Epub 2021 Apr 26.

Univ Rennes, CHU Rennes, Inserm, CIC 1414 [(Centre d'Investigation Clinique de Rennes)], F-35000 Rennes, France.

Objective: To explore the impact of the Annals of Internal Medicine (AIM) data-sharing policy for randomized controlled trials (RCTs) in terms of output from data-sharing (i.e. publications re-using the data).

Study Design And Setting: Retrospective study. RCTs published in the AIM between 2007 and 2017 were retrieved on PubMed. Publications in which the data had been re-used were identified on Web of Science. Searches were performed by two independent reviewers. The primary outcome was any published re-use of the data (re-analysis, secondary analysis, or meta-analysis of individual participant data [MIPD]) in which the first, last, and corresponding authors were not among the authors of the RCT. Analyses used Cox models (primary analysis) adjusted for RCT characteristics (registration: https://osf.io/8pj5e/).

Results: 185 RCTs were identified. 106 (57%) mentioned willingness to share data and 79 (43%) did not. 208 secondary analyses, 67 MIPD and no re-analyses were identified. No significant association was found between intent to share and re-use where the first, last and corresponding authors were not among the authors of the primary RCT (adjusted hazard ratio = 1.04 [0.47-2.30]).
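A one-line sanity check on the headline estimate (values from the abstract; we assume the bracketed interval is a confidence interval, as the abstract does not label it): an interval around the hazard ratio that spans 1 is consistent with the reported lack of association.

```python
# Adjusted hazard ratio and interval as reported in the abstract.
hr, ci_low, ci_high = 1.04, 0.47, 2.30
crosses_one = ci_low < 1.0 < ci_high  # interval spans 1 -> no detectable association
assert crosses_one
```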

Conclusion: Over ten years, an expressed intention to share data in RCTs published in AIM was not associated with more extensive re-use of the data.
http://dx.doi.org/10.1016/j.jclinepi.2021.04.011
September 2021

Incorporating equity, diversity, and inclusiveness into the Hong Kong Principles.

PLoS Biol 2021 04 27;19(4):e3001140. Epub 2021 Apr 27.

Berlin Institute of Health, QUEST Center for Transforming Biomedical Research, Berlin, Germany.

In this response to Labib and Evans, the authors of the Hong Kong Principles look forward to collaborating with the broader research integrity community to ensure that issues of equity, diversity, and inclusion become part of the ecosystem of research integrity.
http://dx.doi.org/10.1371/journal.pbio.3001140
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8078783
April 2021

COVID-19 and the research scholarship ecosystem: help!

Authors:
David Moher

J Clin Epidemiol 2021 09 21;137:133-136. Epub 2021 Apr 21.

Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute; School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Electronic address:

Objectives: Data sharing practices remain elusive in biomedicine. The COVID-19 pandemic has highlighted the problems associated with the lack of data sharing. The objective of this article is to draw attention to the problem and possible ways to address it.

Study Design And Setting: This article examines some of the current open access and data sharing practices at biomedical journals and funders. The consequences of these practices in the context of COVID-19 are also examined.

Results: Despite the best of intentions on the part of funders and journals, COVID-19 biomedical research is not open. Academic institutions need to incentivize and reward data sharing practices as part of researcher assessment. Journals and funders need to implement strong polices to ensure that data sharing becomes a reality. Patients support sharing of their data.

Conclusion: Biomedical journals, funders and academic institutions should act to require stronger adherence to data sharing policies.
http://dx.doi.org/10.1016/j.jclinepi.2021.03.032
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8455105
September 2021