Publications by authors named "Michael G Kenward"

66 Publications

Sensitivity analysis for clinical trials with missing continuous outcome data using controlled multiple imputation: A practical guide.

Stat Med 2020 09 17;39(21):2815-2842. Epub 2020 May 17.

MRC Clinical Trials Unit at UCL, UCL, London, UK.

Missing data due to loss to follow-up or intercurrent events are unintended, but unfortunately inevitable, in clinical trials. Since the true values of missing data are never known, it is necessary to assess the impact of untestable and unavoidable assumptions about any unobserved data in sensitivity analysis. This tutorial provides an overview of controlled multiple imputation (MI) techniques and a practical guide to their use for sensitivity analysis of trials with missing continuous outcome data. These include δ- and reference-based MI procedures. In δ-based imputation, an offset term, δ, is typically added to the expected value of the missing data to assess the impact of unobserved participants having a worse or better response than those observed. Reference-based imputation draws imputed values with some reference to observed data in other groups of the trial, typically in other treatment arms. We illustrate the accessibility of these methods using data from a pediatric eczema trial and a chronic headache trial and provide Stata code to facilitate adoption. We discuss issues surrounding the choice of δ in δ-based sensitivity analysis. We also review the debate on variance estimation within reference-based analysis and justify the use of Rubin's variance estimator in this setting since, as we elaborate on further within, it provides information-anchored inference.
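The δ-adjustment described above can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration: it uses a single mean imputation rather than proper multiple imputation with Rubin's rules, and the arm data and δ values are invented, not taken from the trials in the paper.

```python
import statistics

# Hypothetical final-visit outcomes; None marks a missing (dropout) value.
# These numbers are invented for illustration.
active  = [12.1, 10.4, None, 11.8, None, 12.5]
control = [9.0, 8.2, 8.9, None, 9.4, 9.1]

def impute_mar(arm):
    """Single mean imputation under MAR: fill missing values with the arm's observed mean."""
    mean = statistics.mean(v for v in arm if v is not None)
    return [mean if v is None else v for v in arm]

def delta_adjusted_effect(delta):
    """Add the offset delta to imputed (not observed) values in the active arm,
    then return the difference in arm means."""
    act = [v + delta if orig is None else v
           for orig, v in zip(active, impute_mar(active))]
    ctl = impute_mar(control)
    return statistics.mean(act) - statistics.mean(ctl)

# Negative delta: assume unobserved active-arm patients did worse than those observed.
for delta in (0.0, -1.0, -2.0):
    print(f"delta={delta:+.1f}  treatment effect={delta_adjusted_effect(delta):.2f}")
```

Sweeping δ over a grid in this way shows how far the treatment effect can be pushed before the trial's conclusion changes.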
http://dx.doi.org/10.1002/sim.8569
September 2020

Estimating treatment effects under untestable assumptions with nonignorable missing data.

Stat Med 2020 May 14;39(11):1658-1674. Epub 2020 Feb 14.

Department of Medical Statistics, LSHTM, London, UK.

Nonignorable missing data pose key challenges for estimating treatment effects because the substantive model may not be identifiable without imposing further assumptions. For example, the Heckman selection model has been widely used for handling nonignorable missing data, but it requires correct assumptions both about the joint distribution of the missingness and outcome and about the existence of a valid exclusion restriction. Recent studies have revisited how alternative selection model approaches, for example estimated by multiple imputation (MI) and maximum likelihood, relate to Heckman-type approaches in addressing the first hurdle. However, the extent to which these different selection models rely on the exclusion restriction assumption with nonignorable missing data is unclear. Motivated by an interventional study (REFLUX) with nonignorable missing outcome data in half of the sample, this article critically examines the role of the exclusion restriction in Heckman, MI, and full-likelihood selection models when addressing nonignorability. We explore the implications of the different methodological choices concerning the exclusion restriction for relative bias and root-mean-squared error in estimating treatment effects. We find that the relative performance of the methods differs in practically important ways according to the relevance and strength of the exclusion restriction. The full-likelihood approach is less sensitive to alternative assumptions about the exclusion restriction than Heckman-type models and appears an appropriate method for handling nonignorable missing data. We illustrate the implications of method choice for inference in the REFLUX study, which evaluates the effect of laparoscopic surgery on long-term quality of life for patients with gastro-oesophageal reflux disease.
http://dx.doi.org/10.1002/sim.8504
May 2020

Reference-based sensitivity analysis for time-to-event data.

Pharm Stat 2019 11 15;18(6):645-658. Epub 2019 Jul 15.

Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK.

The analysis of time-to-event data typically makes the censoring at random assumption, ie, that-conditional on covariates in the model-the distribution of event times is the same, whether they are observed or unobserved (ie, right censored). When patients who remain in follow-up stay on their assigned treatment, then analysis under this assumption broadly addresses the de jure, or "while on treatment strategy" estimand. In such cases, we may well wish to explore the robustness of our inference to more pragmatic, de facto or "treatment policy strategy," assumptions about the behaviour of patients post-censoring. This is particularly the case when censoring occurs because patients change, or revert, to the usual (ie, reference) standard of care. Recent work has shown how such questions can be addressed for trials with continuous outcome data and longitudinal follow-up, using reference-based multiple imputation. For example, patients in the active arm may have their missing data imputed assuming they reverted to the control (ie, reference) intervention on withdrawal. Reference-based imputation has two advantages: (a) it avoids the user specifying numerous parameters describing the distribution of patients' postwithdrawal data and (b) it is, to a good approximation, information anchored, so that the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. In this article, we build on recent work in the survival context, proposing a class of reference-based assumptions appropriate for time-to-event data. We report a simulation study exploring the extent to which the multiple imputation estimator (using Rubin's variance formula) is information anchored in this setting and then illustrate the approach by reanalysing data from a randomized trial, which compared medical therapy with angioplasty for patients presenting with angina.
http://dx.doi.org/10.1002/pst.1954
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6899641
November 2019

Information-anchored sensitivity analysis: theory and application.

J R Stat Soc Ser A Stat Soc 2019 Feb 16;182(2):623-645. Epub 2018 Nov 16.

Ashkirk, UK.

Analysis of longitudinal randomized clinical trials is frequently complicated because patients deviate from the protocol. Where such deviations are relevant for the estimand, we are typically required to make an untestable assumption about post-deviation behaviour to perform our primary analysis and to estimate the treatment effect. In such settings, it is now widely recognized that we should follow this with sensitivity analyses to explore the robustness of our inferences to alternative assumptions about post-deviation behaviour. Although there has been much work on how to conduct such sensitivity analyses, little attention has been given to the appropriate loss of information due to missing data within sensitivity analysis. We argue that more attention needs to be given to this issue, showing that it is quite possible for sensitivity analysis both to decrease and to increase the information about the treatment effect. To address this critical issue, we introduce the concept of information-anchored sensitivity analysis. By this we mean sensitivity analyses in which the proportion of information about the treatment estimate lost because of missing data is the same as the proportion of information about the treatment estimate lost because of missing data in the primary analysis. We argue that this forms a transparent, practical starting point for the interpretation of sensitivity analysis. We then derive results showing that, for longitudinal continuous data, a broad class of controlled and reference-based sensitivity analyses performed by multiple imputation are information anchored. We illustrate the theory with simulations and an analysis of a peer review trial, and then discuss our work in the context of other recent work in this area. Our results give a theoretical basis for the use of controlled multiple-imputation procedures for sensitivity analysis.
http://dx.doi.org/10.1111/rssa.12423
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6378615
February 2019

Exploratory study of the impact of perceived reward on habit formation.

BMC Psychol 2018 Dec 20;6(1):62. Epub 2018 Dec 20.

Department of Disease Control, London School of Hygiene and Tropical Medicine, Keppel Street, London, WC1E 7HT, UK.

Background: Habits (learned automatic responses to contextual cues) are considered important in sustaining health behaviour change. While habit formation is promoted by repeating behaviour in a stable context, little is known about what other variables may contribute, and whether there are variables which may accelerate the habit formation process. The aim of this study was to explore variables relating to the perceived reward value of behaviour - pleasure, perceived utility, perceived benefits, and intrinsic motivation. The paper tests whether reward has an impact on habit formation which is mediated by behavioural repetition, and whether reward moderates the relationship between repetition and habit formation.

Methods: Habit formation for flossing and vitamin C tablet adherence was investigated in the general public following an intervention, using a longitudinal, single-group design. Of a total sample of 118 participants, 80 received an online vitamin C intervention at baseline, and all 118 received a face-to-face flossing intervention four weeks later. Behaviour, habit, intention, context stability (whether the behaviour was conducted in the same place and point in routine every time), and reward variables were self-reported every four weeks, for sixteen weeks. Structural equation modelling was used to model reward-related variables as predictors of intention, repetition, and habit, and as moderators of the repetition-habit relationship.

Results: Habit strength and behaviour increased for both target behaviours. Intrinsic motivation and pleasure moderated the relationship between behavioural repetition and habit. Neither perceived utility nor perceived benefits predicted behaviour or interacted with repetition. Limited support was obtained for the mediation hypothesis. Strong intentions unexpectedly weakened the repetition-habit relationship. Context stability mediated and, for vitamin C, also moderated the repetition-habit relationship.

Conclusions: Pleasure and intrinsic motivation can aid habit formation through promoting greater increase in habit strength per behaviour repetition. Perceived reward can therefore reinforce habits, beyond the impact of reward upon repetition. Habit-formation interventions may be most successful where target behaviours are pleasurable or intrinsically valued.
http://dx.doi.org/10.1186/s40359-018-0270-z
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6302524
December 2018

Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

Stat Med 2018 04 18;37(9):1419-1438. Epub 2018 Jan 18.

London Hub for Trials Methodology Research, MRC Clinical Trials Unit at UCL, London, UK.

Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model.
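As a concrete illustration of the two-stage route described above, a fixed-effect inverse-variance pool of per-study estimates and standard errors can be written directly; the study estimates below are invented for illustration.

```python
import math

# Two-stage fixed-effect meta-analysis by inverse-variance weighting.
# Each tuple is (study estimate, standard error); values are invented.
studies = [(1.20, 0.30), (0.85, 0.25), (1.05, 0.40)]

# Stage two: weight each first-stage estimate by the inverse of its variance.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled estimate = {pooled:.3f} (SE {pooled_se:.3f})")
```

Under the fixed-effect assumptions discussed in the abstract, fitting the equivalent one-stage model to the individual patient data yields approximately the same precision as this two-stage pool.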
http://dx.doi.org/10.1002/sim.7589
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5901423
April 2018

Estimation of the linear mixed integrated Ornstein-Uhlenbeck model.

J Stat Comput Simul 2017 May 12;87(8):1541-1558. Epub 2017 Jan 12.

School of Social and Community Medicine, University of Bristol, Bristol, UK.

The linear mixed model with an added integrated Ornstein-Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance).
http://dx.doi.org/10.1080/00949655.2016.1277425
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5407356
May 2017

A penalized framework for distributed lag non-linear models.

Biometrics 2017 09 30;73(3):938-948. Epub 2017 Jan 30.

Department of Medical Statistics, London School of Hygiene & Tropical Medicine, Keppel Street, London WC1E 7HT, UK.

Distributed lag non-linear models (DLNMs) are a modelling tool for describing potentially non-linear and delayed dependencies. Here, we illustrate an extension of the DLNM framework through the use of penalized splines within generalized additive models (GAM). This extension offers built-in model selection procedures and the possibility of accommodating assumptions on the shape of the lag structure through specific penalties. In addition, this framework includes, as special cases, simpler models previously proposed for linear relationships (DLMs). Alternative versions of penalized DLNMs are compared with each other and with the standard unpenalized version in a simulation study. Results show that this penalized extension to the DLNM class provides greater flexibility and improved inferential properties. The framework exploits recent theoretical developments of GAMs and is implemented using efficient routines within freely available software. Real-data applications are illustrated through two reproducible examples in time series and survival analysis.
http://dx.doi.org/10.1111/biom.12645
September 2017

Comparisons between mild and severe cases of hand, foot and mouth disease in temporal trends: a comparative time series study from mainland China.

BMC Public Health 2016 10 21;16(1):1109. Epub 2016 Oct 21.

Department of Epidemiology and Biostatistics, West China School of Public Health, Sichuan University, Chengdu, China.

Background: Over recent decades, hand, foot and mouth disease (HFMD) has emerged as a serious public health threat in the Asia-Pacific region because of its high rates of severe complications. Understanding the differences and similarities between mild and severe cases can be helpful in the control of HFMD. In this study, we compared the two types of HFMD cases in their temporal trends.

Methods: We retrieved the daily series of disease counts of mild and severe HFMD cases reported in mainland China in the period of 2009-2014. We applied a quasi-Poisson regression model to decompose each series into the long-term linear trend, periodic variations, and short-term fluctuations, and then we compared each component between two series separately.

Results: A total of 11,101,860 clinical HFMD cases, together with 115,596 severe cases, were included in this analysis. We found a biennial increase of 24.46% (95% CI: 22.80-26.14%) in the baseline disease incidence of mild cases, whereas a biennial decrease of 8.80% (95% CI: 7.26-10.31%) was seen for severe cases. The periodic variations of both series could be characterized by a mixture of biennial, annual, semi-annual, and eight-monthly cycles. However, compared with the mild cases, the severe-case series varied more widely over the biennial and annual cycles and started its annual epidemic earlier. We also found that the short-term fluctuations of the two series were significantly correlated at the current day, with a correlation coefficient of 0.46 (95% CI: 0.43-0.49).

Conclusions: We found some noticeable differences, as well as similarities, between the daily series of mild and severe HFMD cases at different time scales. Our findings deepen understanding of the transmission of the different types of HFMD cases and provide evidence for planning the associated disease-control strategies.
http://dx.doi.org/10.1186/s12889-016-3762-x
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5073464
October 2016

Clustering of contacts relevant to the spread of infectious disease.

Epidemics 2016 12 26;17:1-9. Epub 2016 Aug 26.

Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, Keppel Street, London WC1E 7HT, United Kingdom; Modelling and Economics Unit, Public Health England, 61 Colindale Avenue, London NW9 5EQ, United Kingdom.

Objective: Infectious disease spread depends on contact rates between infectious and susceptible individuals. Transmission models are commonly informed using empirically collected contact data, but the relevance of different contact types to transmission is still not well understood. Some studies select contacts based on a single characteristic such as proximity (physical/non-physical), location, duration or frequency. This study aimed to explore whether clusters of contacts similar to each other across multiple characteristics could better explain disease transmission.

Methods: Individual contact data from the POLYMOD survey in Poland, Great Britain, Belgium, Finland and Italy were grouped into clusters by the k-medoids clustering algorithm with a Manhattan distance metric to stratify contacts using all four characteristics. Contact clusters were then used to fit a transmission model to sero-epidemiological data for varicella-zoster virus (VZV) in each country.

Results And Discussion: Across the five countries, 9-15 clusters were found to optimise both quality of clustering (measured using average silhouette width) and quality of fit (measured using several information criteria). Of these, 2-3 clusters were most relevant to VZV transmission, characterised by (i) 1-2 clusters of age-assortative contacts in schools and (ii) a cluster of less age-assortative contacts in non-school settings. Quality of fit was similar to that using contacts stratified by a single characteristic, providing validation that single stratifications are appropriate. However, using clustering to stratify contacts by multiple characteristics provided insight into the structures underlying infection transmission, particularly the role of age-assortative contacts involving school-age children in VZV transmission between households.
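A toy version of the clustering step can be sketched as follows. This is a simplified k-medoids loop with a Manhattan distance, not the authors' implementation; the numeric encoding of the four contact characteristics, the data, and the choice of k = 2 are all illustrative assumptions.

```python
import random

random.seed(0)

# Each toy "contact" encodes four characteristics (e.g. proximity, location,
# duration, frequency) as numbers; data and k are illustrative assumptions.
contacts = [(1, 0, 2, 5), (1, 0, 3, 4), (1, 0, 2, 4),
            (0, 1, 8, 1), (0, 1, 9, 2), (0, 1, 7, 1)]

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def k_medoids(points, k, iters=10):
    medoids = random.sample(points, k)
    clusters = {}
    for _ in range(iters):
        # Assignment step: each point joins its nearest medoid.
        clusters = {m: [] for m in medoids}
        for p in points:
            clusters[min(medoids, key=lambda m: manhattan(p, m))].append(p)
        # Update step: each medoid becomes the member minimising
        # the total within-cluster Manhattan distance.
        medoids = [min(members,
                       key=lambda c: sum(manhattan(c, q) for q in members))
                   for members in clusters.values()]
    return medoids, clusters

medoids, clusters = k_medoids(contacts, k=2)
print(medoids)
```

Unlike k-means, medoids are always actual data points, which is why the method pairs naturally with an arbitrary distance such as Manhattan on mixed contact characteristics.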
http://dx.doi.org/10.1016/j.epidem.2016.08.001
December 2016

Properties of Estimators in Exponential Family Settings with Observation-based Stopping Rules.

J Biom Biostat 2016 Feb 25;7(1). Epub 2016 Jan 25.

Department of Statistics, North Carolina State University, Raleigh NC, USA.

Often, sample size is not fixed by design. A key example is a sequential trial with a stopping rule, where stopping is based on what has been observed at an interim look. While such designs are used for time and cost efficiency, and hypothesis testing theory has been well developed, estimation following a sequential trial is a challenging, still controversial problem. Progress has been made in the literature, predominantly for normal outcomes and/or for a deterministic stopping rule. Here, we place these settings in the broader context of outcomes following an exponential family distribution, with a stochastic stopping rule that includes a deterministic rule and a completely random sample size as special cases. It is shown that the estimation problem is usually simpler than often thought. In particular, it is established that the ordinary sample average is a very sensible choice, contrary to commonly encountered statements. We study (1) the so-called incompleteness property of the sufficient statistics, (2) a general class of linear estimators, and (3) joint and conditional likelihood estimation. Apart from the general exponential family setting, normal and binary outcomes are considered as key examples. While our results hold for a general number of looks, for ease of exposition we focus on the simple yet generic setting of two possible sample sizes, N=n or N=2n.
http://dx.doi.org/10.4172/2155-6180.1000272
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4861245
February 2016

Reference-based sensitivity analysis via multiple imputation for longitudinal trials with protocol deviation.

Stata J 2016 Apr;16(2):443-463

MRC Clinical Trials Unit at UCL, London School of Hygiene and Tropical Medicine, London, UK.

Randomized controlled trials provide essential evidence for the evaluation of new and existing medical treatments. Unfortunately, the statistical analysis is often complicated by the occurrence of protocol deviations, which mean we cannot always measure the intended outcomes for individuals who deviate, resulting in a missing-data problem. In such settings, however one approaches the analysis, an untestable assumption about the distribution of the unobserved data must be made. To understand how far the results depend on these assumptions, the primary analysis should be supplemented by a range of sensitivity analyses, which explore how the conclusions vary over a range of different credible assumptions for the missing data. In this article, we describe a new command, mimix, that can be used to perform reference-based sensitivity analyses for randomized controlled trials with longitudinal quantitative outcome data, using the approach proposed by Carpenter, Roger, and Kenward (2013, Journal of Biopharmaceutical Statistics 23: 1352-1371). Under this approach, we make qualitative assumptions about how individuals' missing outcomes relate to those observed in relevant groups in the trial, based on plausible clinical scenarios. Statistical analysis then proceeds using the method of multiple imputation.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5796638
April 2016

Autonomy dimensions and care seeking for delivery in Zambia; the prevailing importance of cluster-level measurement.

Sci Rep 2016 Mar 2;6:22578. Epub 2016 Mar 2.

London School of Hygiene & Tropical Medicine, Faculty of Epidemiology and Population Health.

It is widely held that decisions whether or when to attend health facilities for childbirth are not only influenced by risk awareness and household wealth, but also by factors such as autonomy or a woman's ability to act upon her own preferences. How autonomy should be constructed and measured - namely, as an individual or cluster-level variable - has been less examined. We drew on household survey data from Zambia to study the effect of several autonomy dimensions (financial, relationship, freedom of movement, health care seeking and violence) on place of delivery for 3200 births across 203 rural clusters (villages). In multilevel logistic regression, two autonomy dimensions (relationship and health care seeking) were strongly associated with facility delivery when measured at the cluster level (OR 1.27 and 1.57, respectively), though not at the individual level. This suggests that power relations and gender norms at the community level may override an individual woman's autonomy, and cluster-level measurement may prove critical to understanding the interplay between autonomy and care seeking in this and similar contexts.
http://dx.doi.org/10.1038/srep22578
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4773858
March 2016

Can Internet-Based Sexual Health Services Increase Diagnoses of Sexually Transmitted Infections (STI)? Protocol for a Randomized Evaluation of an Internet-Based STI Testing and Results Service.

JMIR Res Protoc 2016 Jan 15;5(1):e9. Epub 2016 Jan 15.

Department of Population Health, Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, United Kingdom.

Background: Ensuring rapid access to high quality sexual health services is a key public health objective, both in the United Kingdom and internationally. Internet-based testing services for sexually transmitted infections (STIs) are considered to be a promising way to achieve this goal. This study will evaluate a nascent online STI testing and results service in South East London, delivered alongside standard face-to-face STI testing services.

Objective: The aim of this study is to establish whether an online testing and results service can (1) increase diagnoses of STIs and (2) increase uptake of STI testing, when delivered alongside standard face-to-face STI testing services.

Methods: This is a single-blind randomized controlled trial. We will recruit 3000 participants who meet the following eligibility criteria: 16-30 years of age, resident in the London boroughs of Lambeth and Southwark, having at least one sexual partner in the last 12 months, having access to the Internet and willing to take an STI test. People unable to provide informed consent and unable to read and understand English (the websites will be in English) will be excluded. Baseline data will be collected at enrolment. This includes participant contact details, demographic data (date of birth, gender, ethnicity, and sexual orientation), and sexual health behaviors (last STI test, service used at last STI test and number of sexual partners in the last 12 months). Once enrolled, participants will be randomly allocated either (1) to an online STI testing and results service (Sexual Health 24) offering postal self-administered STI kits for chlamydia, gonorrhoea, syphilis, and HIV; results via text message (short message service, SMS), except positive results for HIV, which will be delivered by phone; and direct referrals to local clinics for treatment or (2) to a conventional sexual health information website with signposting to local clinic-based sexual health services. Participants will be free to use any other interventions or services during the trial period. At 6 weeks from randomization we will collect self-reported follow-up data on service use, STI tests and results, treatment prescribed, and acceptability of STI testing services. We will also collect objective data from participating STI testing services on uptake of STI testing, STI diagnoses and treatment. We hypothesise that uptake of STI testing and STI diagnoses will be higher in the intervention arm. Our hypothesis is based on the assumption that the intervention is less time-consuming, more convenient, more private, and incurs less stigma and embarrassment than face-to-face STI testing pathways.
The primary outcome measure is diagnosis of any STI at 6 weeks from randomization and our co-primary outcome is completion of any STI test at 6 weeks from randomization. We define completion of a test as samples returned, processed, and results delivered to the intervention and/or clinic settings. We will use risk ratios to calculate the effect of the intervention on our primary outcomes, with 95% confidence intervals. All analyses will be based on the intention-to-treat (ITT) principle.
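The planned risk-ratio analysis can be illustrated with the standard large-sample confidence interval built on the log scale; the event counts below are hypothetical, not trial results.

```python
import math

def risk_ratio_ci(a, n1, b, n0, z=1.96):
    """Risk ratio for a/n1 events vs b/n0 events, with a 95% CI
    computed on the log scale (standard large-sample formula)."""
    rr = (a / n1) / (b / n0)
    log_se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lower = math.exp(math.log(rr) - z * log_se)
    upper = math.exp(math.log(rr) + z * log_se)
    return rr, lower, upper

# Hypothetical counts: 90/1500 diagnoses in the online arm, 60/1500 in the control arm.
rr, lower, upper = risk_ratio_ci(90, 1500, 60, 1500)
print(f"RR = {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

The interval is computed on the log scale because log(RR) is approximately normal in large samples, then exponentiated back.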

Results: This study is funded by Guy's and St Thomas' Charity and it has received ethical approval from NRES Committee London-Camberwell St Giles (Ref 14/LO/1477). Research and Development approval has been obtained from Kings College Hospital NHS Foundation Trust and Guy's and St Thomas' NHS Foundation Trust. Results are expected in June 2016.

Conclusions: This study will provide evidence on the effectiveness of an online STI testing and results service in South East London. Our findings may also be generalizable to similar populations in the United Kingdom.

Trial Registration: International Standard Randomized Controlled Trial Number (ISRCTN): 13354298; http://www.isrctn.com/ISRCTN13354298 (Archived by WebCite at http://www.webcitation.org/6d9xT2bPj).
http://dx.doi.org/10.2196/resprot.4094
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4733221
January 2016

Estimation After a Group Sequential Trial.

Stat Biosci 2015 Oct 22;7(2):187-205. Epub 2014 Feb 22.

I-BioStat, Katholieke Universiteit Leuven, B-3000 Leuven, Belgium; I-BioStat, Universiteit Hasselt, B-3590 Diepenbeek, Belgium.

Group sequential trials are one important instance of studies for which the sample size is not fixed but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability, they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N=n or N=2n. In this paper, we consider the more practically useful setting of sample sizes in a finite set {n1, n2, …, nk}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased.
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
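The conditional-versus-marginal bias phenomenon can be seen in a small simulation. The sketch below is illustrative only: the two-stage design, stopping rule, and sample sizes are invented, not taken from the paper. It stops at an interim analysis of n1 = 10 observations when the interim mean exceeds zero, and otherwise continues to the full sample of 20.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative two-stage design (sizes and rule invented, not the paper's):
# look at n1 = 10 observations; stop if their mean exceeds 0, otherwise
# continue to the full sample of n2 = 20.
mu, n1, n2, nsim = 0.0, 10, 20, 20_000
means = np.empty(nsim)
stopped_early = np.empty(nsim, dtype=bool)

for i in range(nsim):
    stage1 = rng.normal(mu, 1.0, n1)
    if stage1.mean() > 0.0:
        means[i], stopped_early[i] = stage1.mean(), True
    else:
        full = np.concatenate([stage1, rng.normal(mu, 1.0, n2 - n1)])
        means[i], stopped_early[i] = full.mean(), False

marginal_bias = means.mean() - mu              # small
early_bias = means[stopped_early].mean() - mu  # substantial
print(marginal_bias, early_bias)
```

Conditional on early stopping, the sample average over-estimates the mean by roughly sigma * sqrt(2/pi) / sqrt(n1), about 0.25 in this setup, while the bias marginalized over the sample size is far smaller; conditioning on the realized sample size is what creates the impression of bias.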
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1007/s12561-014-9112-6DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4603757PMC
October 2015

Estimating Disease Duration in Cross-sectional Surveys.

Epidemiology 2015 Nov;26(6):839-45

From the Faculty of Infectious and Tropical Diseases, Department of Disease Control, and the Faculty of Epidemiology and Population Health, Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, United Kingdom.

Background: In some common episodic conditions, such as diarrhea, respiratory infections, or fever, episode duration can reflect disease severity. The mean episode duration in a population can be estimated if both the incidence and prevalence of the condition are known. In this article, we discuss how an estimator of the average episode duration may be obtained based on prevalence alone if data are collected for two consecutive units of time (usually days) in the same person.

Methods: We derive a maximum likelihood estimator of episode duration, explore its behavior in a simulation study, and illustrate its use with a real example.

Results: We show that for two consecutive days, the estimator of the mean episode duration in a population equals one plus twice the ratio of the number of subjects with the condition on both days to the number of subjects ill on only one of the two days. The estimator can be extended to account for three or four consecutive days. It assumes nonoverlapping episodes and a time-constant incidence rate, and is more precise for shorter than for longer average episode durations.

Conclusion: The proposed method allows estimating the mean duration of disease episodes in cross-sectional studies and is applicable to large demographic and health surveys in low-income settings that routinely collect data on diarrhea and respiratory illness. The method may further be used for the calculation of the duration of infectiousness if test results are available for two consecutive days, such as paired throat swabs for influenza.
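The closed-form estimator described in the Results paragraph is simple enough to state directly in code. The sketch below (function name and example counts are illustrative, not from the paper) computes the estimated mean episode duration from two consecutive days of prevalence data:

```python
def mean_episode_duration(n_both_days: int, n_one_day: int) -> float:
    """Estimated mean episode duration (in days) from two consecutive days
    of prevalence data: one plus twice the ratio of subjects ill on both
    days to subjects ill on exactly one of the two days."""
    if n_one_day == 0:
        raise ValueError("need at least one subject ill on exactly one day")
    return 1.0 + 2.0 * n_both_days / n_one_day

# e.g. 30 subjects ill on both days, 20 ill on exactly one day
print(mean_episode_duration(30, 20))  # → 4.0
```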
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1097/EDE.0000000000000364DOI Listing
November 2015

James Roger: A brief biography.

Stat Methods Med Res 2015 Aug;24(4):399-402

Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK

View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1177/0962280214520734DOI Listing
August 2015

Is ethnic density associated with risk of child pedestrian injury? A comparison of inter-census changes in ethnic populations and injury rates.

Ethn Health 2016 12;21(1):1-19. Epub 2014 Dec 12.

Department of Population Health, London School of Hygiene and Tropical Medicine, London, UK.

Objectives: Research on inequalities in child pedestrian injury risk has identified some puzzling trends: although, in general, living in more affluent areas protects children from injury, this is not true for those in some minority ethnic groups. This study aimed to identify whether 'group density' effects are associated with injury risk, and whether taking these into account alters the relationship between area deprivation and injury risk. 'Group density' effects exist when ethnic minorities living in an area with a higher proportion of people from a similar ethnic group enjoy better health than those who live in areas with a lower proportion, even though areas with dense minority ethnic populations can be relatively more materially disadvantaged.

Design: This study utilised variation in minority ethnic densities in London between two census periods to identify any associations between group density and injury risk. Using police data on road traffic injury and population census data from 2001 to 2011, the numbers of 'White,' 'Asian' and 'Black' child pedestrian injuries in an area were modelled as a function of the percentage of the population in that area that are 'White,' 'Asian' and 'Black,' controlling for socio-economic disadvantage and characteristics of the road environment.

Results: There was strong evidence (p < 0.001) of a negative association between 'Black' population density and 'Black' child pedestrian injury risk [incidence (of injury) rate ratios (IRR) 0.575, 95% CI 0.515-0.642]. There was weak evidence (p = 0.083) of a negative association between 'Asian' density and 'Asian' child pedestrian injury risk (IRR 0.901, 95% CI 0.801-1.014) and no evidence (p = 0.412) of an association between 'White' density and 'White' child pedestrian injury risk (IRR 1.075, 95% CI 0.904-1.279). When group density effects are taken into account, area deprivation is associated with injury risk of all ethnic groups.

Conclusions: Group density appears to protect 'Black' children living in London against pedestrian injury risk. These findings suggest that future research should focus on structural properties of societies to explain the relationships between minority ethnicity and risk.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1080/13557858.2014.985637DOI Listing
July 2016

The impact of alcohol consumption on patterns of union formation in Russia 1998-2010: an assessment using longitudinal data.

Popul Stud (Camb) 2014;68(3):283-303

London School of Hygiene & Tropical Medicine.

Using data from the Russian Longitudinal Monitoring Survey, 1998-2010, we investigated the extent to which patterns of alcohol consumption in Russia are associated with the subsequent likelihood of entry into cohabitation and marriage. Using discrete-time event history analysis we estimated for 16-50 year olds the extent to which the probabilities of entry into the two types of union were affected by the amount of alcohol drunk and the pattern of drinking, adjusted to allow for social and demographic factors including income, employment, and health. The results show that individuals who did not drink alcohol were less likely to embark on either cohabitation or marriage, that frequent consumption of alcohol was associated with a greater chance of entering unmarried cohabitation than of entering into a marriage, and that heavy drinkers were less likely to convert their relationship from cohabitation to marriage.
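Discrete-time event history analysis, the method used in this study and several others below, is fitted on person-period records with a binary event indicator per period. A minimal sketch of the data expansion and the nonparametric discrete-time hazard, with invented records (the survey's variables are not reproduced):

```python
from collections import Counter

# Hypothetical person-level records: number of annual waves observed and
# whether a union was formed at the last observed wave; data invented.
people = [
    {"waves_at_risk": 3, "event": True},
    {"waves_at_risk": 5, "event": False},  # censored without forming a union
    {"waves_at_risk": 2, "event": True},
    {"waves_at_risk": 4, "event": True},
]

# Expand to person-period records, the layout on which
# discrete-time event history models are fitted.
person_periods = []
for pid, p in enumerate(people):
    for t in range(1, p["waves_at_risk"] + 1):
        person_periods.append({
            "id": pid,
            "period": t,
            # event indicator is 1 only in the final period, if an event occurred
            "event": int(p["event"] and t == p["waves_at_risk"]),
        })

# Nonparametric discrete-time hazard: events / persons at risk, per period.
at_risk, events = Counter(), Counter()
for row in person_periods:
    at_risk[row["period"]] += 1
    events[row["period"]] += row["event"]
hazard = {t: events[t] / at_risk[t] for t in sorted(at_risk)}
print(hazard)
```

In practice the hazard is modelled by logistic regression of the event indicator on period dummies plus covariates such as drinking pattern, income, and employment; the expansion step above is the same.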
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1080/00324728.2014.955045DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4487543PMC
September 2016

A characterization of missingness at random in a generalized shared-parameter joint modeling framework for longitudinal and time-to-event data, and sensitivity analysis.

Biom J 2014 Nov 20;56(6):1001-15. Epub 2014 Jun 20.

I-BioStat, Universiteit Hasselt, B-3590, Diepenbeek, Belgium.

We consider a conceptual correspondence between the missing data setting and joint modeling of longitudinal and time-to-event outcomes. Building on this correspondence, we formulate an extended shared-random-effects joint model and provide a characterization of missing at random within it that is in line with the characterization used in the missing data setting. The ideas are illustrated using data from a study on liver cirrhosis, contrasting the new framework with conventional joint models.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/bimj.201300028DOI Listing
November 2014

Missing data sensitivity analysis for recurrent event data using controlled imputation.

Pharm Stat 2014 Jul-Aug;13(4):258-64. Epub 2014 Jun 16.

GlaxoSmithKline Research and Development, Middlesex, UK.

Statistical analyses of recurrent event data have typically been based on the missing at random assumption. One implication of this is that, if data are collected only when patients are on their randomized treatment, the resulting de jure estimator of treatment effect corresponds to the situation in which the patients adhere to this regime throughout the study. For confirmatory analysis of clinical trials, sensitivity analyses are required to investigate alternative de facto estimands that depart from this assumption. Recent publications have described the use of multiple imputation methods based on pattern mixture models for continuous outcomes, where imputation for the missing data for one treatment arm (e.g. the active arm) is based on the statistical behaviour of outcomes in another arm (e.g. the placebo arm). This has been referred to as controlled imputation or reference-based imputation. In this paper, we use the negative multinomial distribution to apply this approach to analyses of recurrent events and other similar outcomes. The methods are illustrated by a trial in severe asthma where the primary endpoint was rate of exacerbations and the primary analysis was based on the negative binomial model.
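As a rough illustration of the controlled-imputation idea for recurrent events, the sketch below imputes the unobserved follow-up of active-arm withdrawals at the placebo event rate (a "jump to reference" assumption). It uses simple Poisson draws rather than the negative multinomial machinery of the paper, and all data and function names are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: (exacerbation count, observed follow-up as a fraction of
# the planned 1-year follow-up); patients with follow-up < 1.0 withdrew.
placebo = [(4, 1.0), (2, 1.0), (5, 1.0), (3, 1.0)]
active = [(1, 1.0), (0, 1.0), (2, 0.5), (1, 0.25)]

# Placebo event rate per unit of follow-up time.
placebo_rate = sum(c for c, _ in placebo) / sum(t for _, t in placebo)

def impute_jump_to_reference(arm, ref_rate, n_imputations=1000):
    """Complete each patient's unobserved follow-up by drawing events at
    the reference (placebo) rate -- a simplified Poisson stand-in for the
    negative-multinomial draws used in the paper."""
    mean_counts = np.empty(n_imputations)
    for m in range(n_imputations):
        total = 0.0
        for count, followup in arm:
            total += count + rng.poisson(ref_rate * (1.0 - followup))
        mean_counts[m] = total / len(arm)  # mean completed count per patient
    return mean_counts

completed = impute_jump_to_reference(active, placebo_rate)
print(placebo_rate, completed.mean())
```

Each imputation yields a completed data set to which the usual negative binomial analysis model would then be applied, with results combined across imputations.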
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1002/pst.1624DOI Listing
April 2015

Are missing data adequately handled in cluster randomised trials? A systematic review and guidelines.

Clin Trials 2014 Oct 5;11(5):590-600. Epub 2014 Jun 5.

Centre for Primary Care and Public Health, Queen Mary University of London, London, UK.

Background: Missing data are a potential source of bias, and their handling in the statistical analysis can have an important impact on both the likelihood and degree of such bias. Inadequate handling of the missing data may also result in invalid variance estimation. The handling of missing values is more complex in cluster randomised trials, but there are no reviews of practice in this field.

Objectives: A systematic review of published trials was conducted to examine how missing data are reported and handled in cluster randomised trials.

Methods: We systematically identified cluster randomised trials, published in English in 2011, using the National Library of Medicine (MEDLINE) via PubMed. Non-randomised and pilot/feasibility trials were excluded, as were reports of secondary analyses, interim analyses, and economic evaluations and those where no data were at the individual level. We extracted information on missing data and the statistical methods used to deal with them from a random sample of the identified studies.

Results: We included 132 trials. There was evidence of missing data in 95 (72%). Only 32 trials reported handling missing data, 22 of them using a variety of single imputation techniques, 8 using multiple imputation without accommodating the clustering and 2 stating that their likelihood-based complete case analysis accounted for missing values because the data were assumed Missing-at-Random.

Limitations: The results presented in this study are based on a large random sample of cluster randomised trials published in 2011, identified in electronic searches and therefore possibly missing some trials, most likely of poorer quality. Also, our results are based on information in the main publication for each trial. These reports may omit some important information on the presence of, and reasons for, missing data and on the statistical methods used to handle them. Our extraction methods, based on published reports, could not distinguish between missing data in outcomes and missing data in covariates. This distinction may be important in determining the assumptions about the missing data mechanism necessary for complete case analyses to be valid.

Conclusions: Missing data are present in the majority of cluster randomised trials. However, they are poorly reported, and most authors give little consideration to the assumptions under which their analysis will be valid. The majority of the methods currently used are valid under very strong assumptions about the missing data, whose plausibility is rarely discussed in the corresponding reports. This may have important consequences for the validity of inferences in some trials. Methods which result in valid inferences under general Missing-at-Random assumptions are available and should be made more accessible.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1177/1740774514537136DOI Listing
October 2014

Women's risk of repeat abortions is strongly associated with alcohol consumption: a longitudinal analysis of a Russian national panel study, 1994-2009.

PLoS One 2014 26;9(3):e90356. Epub 2014 Mar 26.

Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, United Kingdom.

Abortion rates in Russia, particularly repeat abortions, are among the highest in the world, and abortion complications make a substantial contribution to the country's high maternal mortality rate. Russia also has a very high rate of hazardous alcohol use. However, the association between alcohol use and abortion in Russia remains unexplored. We investigated the longitudinal predictors of first and repeat abortion, focussing on women's alcohol use as a risk factor. Follow-up data from 2,623 women of reproductive age (16-44 years) was extracted from 14 waves of the Russian Longitudinal Monitoring Survey (RLMS), a nationally representative panel study covering the period 1994-2009. We used discrete time hazard models to estimate the probability of having a first and repeat abortion by social, demographic and health characteristics at the preceding study wave. Having a first abortion was associated with demographic factors such as age and parity, whereas repeat abortions were associated with low education and alcohol use. After adjustment for demographic and socioeconomic factors, the risk of having a repeat abortion increased significantly as women's drinking frequency increased (P<0.001), and binge drinking women were significantly more likely to have a repeat abortion than non-drinkers (OR 2.28, 95% CI 1.62-3.20). This association was not accounted for by contraceptive use or a higher risk of pregnancy. Therefore the determinants of first and repeat abortion in Russia between 1994-2009 were different. Women who had repeat abortions were distinguished by their heavier and more frequent alcohol use. The mechanism for the association is not well understood but could be explained by unmeasured personality factors, such as risk taking, or social non-conformity increasing the risk of unplanned pregnancy. Heavy or frequent drinkers constitute a particularly high risk group for repeat abortion, who could be targeted in prevention efforts.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0090356PLOS
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3966730PMC
December 2015

Maternal dietary fatty acid intake during pregnancy and the risk of preclinical and clinical type 1 diabetes in the offspring.

Br J Nutr 2014 Mar;111(5):895-903

Nutrition Unit, Department of Lifestyle and Participation, National Institute for Health and Welfare, PO Box 30, FI-00271 Helsinki, Finland.

The aim of the present study was to examine the associations between the maternal intake of fatty acids during pregnancy and the risk of preclinical and clinical type 1 diabetes in the offspring. The study included 4887 children with human leucocyte antigen (HLA)-conferred type 1 diabetes susceptibility born during the years 1997-2004 from the Finnish Type 1 Diabetes Prediction and Prevention Study. Maternal diet was assessed with a validated FFQ. The offspring were observed at 3- to 12-month intervals for the appearance of type 1 diabetes-associated autoantibodies and development of clinical type 1 diabetes (average follow-up period: 4·6 years (range 0·5-11·5 years)). Altogether, 240 children developed preclinical type 1 diabetes and 112 children developed clinical type 1 diabetes. A piecewise linear log-hazard survival model and Cox proportional-hazards regression were used for the statistical analyses. The maternal intake of palmitic acid (hazard ratio (HR) 0·82, 95 % CI 0·67, 0·99) and high consumption of cheese during pregnancy (highest quarter v. intermediate half HR 0·52, 95 % CI 0·31, 0·87) were associated with a decreased risk of clinical type 1 diabetes. The consumption of sour milk products (HR 1·14, 95 % CI 1·02, 1·28), intake of protein from sour milk (HR 1·15, 95 % CI 1·02, 1·29) and intake of fat from fresh milk (HR 1·43, 95 % CI 1·04, 1·96) were associated with an increased risk of preclinical type 1 diabetes, and the intake of low-fat margarines (HR 0·67, 95 % CI 0·49, 0·92) was associated with a decreased risk. No conclusive associations between maternal fatty acid intake or food consumption during pregnancy and the development of type 1 diabetes in the offspring were detected.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1017/S0007114513003073DOI Listing
March 2014

Using multi-level data to estimate the effect of social capital on hazardous alcohol consumption in the former Soviet Union.

Eur J Public Health 2014 Aug 27;24(4):572-7. Epub 2014 Jan 27.

Department of Health Services Research and Policy, European Centre on Health of Societies in Transition (ECOHOST), London School of Hygiene and Tropical Medicine, London, UK.

Background: Hazardous alcohol consumption is a leading cause of mortality in the former Soviet Union (fSU), but little is known about the social factors associated with this behaviour. We set out to estimate the association between individual- and community-level social capital and hazardous alcohol consumption in the fSU.

Methods: Data were obtained from Health in Times of Transition 2010, a household survey of nine fSU countries (n = 18 000 within 2027 communities). Individual-level indicators of social isolation, civic participation, help in a crisis and interpersonal trust were aggregated to the community level. Adjusting for demographic factors, the association of individual- and community-level indicators with problem drinking (CAGE) and episodic heavy drinking was estimated using a population average model for the analysis of multi-level data.

Results: Among men, individual social isolation [odds ratio (OR) = 1.20], community social isolation (OR = 1.18) and community civic participation (OR = 4.08) were associated with increased odds of CAGE. Community civic participation (OR = 2.91) increased the odds of episodic heavy drinking, while community interpersonal trust (OR = 0.89) decreased these odds. Among women, individual social isolation (OR = 1.30) and community civic participation (OR = 2.94) increased odds of CAGE.

Conclusion: Our results provide evidence of the role of some elements of social capital in problem drinking in the fSU, and highlight the importance of community effects. The nature of civic organizations in the fSU, and the communities in which civic participation is high, should be further investigated to inform alcohol policy in the region.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1093/eurpub/ckt213DOI Listing
August 2014

Analysis of longitudinal trials with protocol deviation: a framework for relevant, accessible assumptions, and inference via multiple imputation.

J Biopharm Stat 2013;23(6):1352-71

Medical Statistics Department, London School of Hygiene & Tropical Medicine, London, UK.

Protocol deviations, for example, due to early withdrawal and noncompliance, are unavoidable in clinical trials. Such deviations often result in missing data. Additional assumptions are then needed for the analysis, and these cannot be definitively verified from the data at hand. Thus, as recognized by recent regulatory guidelines and reports, clarity about these assumptions and their implications is vital for both the primary analysis and framing relevant sensitivity analysis. This article focuses on clinical trials with longitudinal quantitative outcome data. For the target population, we define two estimands, the de jure estimand, "does the treatment work under the best case scenario," and the de facto estimand, "what would be the effect seen in practice." We then carefully define the concept of a deviation from the protocol relevant to the estimand, or for short a deviation. Each patient's postrandomization data can then be divided into predeviation data and postdeviation data. We set out an accessible framework for contextually appropriate assumptions relevant to de facto and de jure estimands, that is, assumptions about the joint distribution of pre- and postdeviation data relevant to the clinical question at hand. We then show how, under these assumptions, multiple imputation provides a practical approach to estimation and inference. We illustrate with data from a longitudinal clinical trial in patients with chronic asthma.
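A crude sketch of the multiple-imputation mechanics behind this kind of sensitivity analysis: impute post-deviation values from the observed data, then shift the imputed values by an offset delta and re-estimate across a grid of offsets (the δ-based approach described in the tutorial above). The data, the resampling-based imputation step, and the delta grid are all illustrative, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented final-visit outcomes in one arm; np.nan marks patients with
# missing data after a protocol deviation.
outcomes = np.array([12.1, 9.8, 11.4, np.nan, 10.7, np.nan, 13.0, 9.2])
observed = outcomes[~np.isnan(outcomes)]
n_missing = int(np.isnan(outcomes).sum())

def delta_adjusted_means(deltas, n_imputations=2000):
    """Impute missing values by resampling the observed outcomes (a crude
    MAR-style draw), shift each imputed value by an offset delta, and
    return the average completed-data mean for each delta."""
    results = {}
    for delta in deltas:
        means = np.empty(n_imputations)
        for m in range(n_imputations):
            imputed = rng.choice(observed, size=n_missing) + delta
            means[m] = np.concatenate([observed, imputed]).mean()
        results[delta] = float(means.mean())
    return results

# Tipping-point style sweep over increasingly unfavourable offsets.
print(delta_adjusted_means([0.0, -1.0, -2.0]))
```

Sweeping delta until the treatment-effect conclusion changes gives the "tipping point", which can then be judged for clinical plausibility.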
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1080/10543406.2013.834911DOI Listing
June 2014

Multiple imputation methods for handling missing data in cost-effectiveness analyses that use data from hierarchical studies: an application to cluster randomized trials.

Med Decis Making 2013 11 1;33(8):1051-63. Epub 2013 Aug 1.

Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK (MGK)

Purpose: Multiple imputation (MI) has been proposed for handling missing data in cost-effectiveness analyses (CEAs). In CEAs that use cluster randomized trials (CRTs), the imputation model, like the analysis model, should recognize the hierarchical structure of the data. This paper contrasts a multilevel MI approach that recognizes clustering, with single-level MI and complete case analysis (CCA) in CEAs that use CRTs.

Methods: We consider a multilevel MI approach compatible with multilevel analytical models for CEAs that use CRTs. We took fully observed data from a CEA that evaluated an intervention to improve diagnosis of active labor in primiparous women using a CRT (2078 patients, 14 clusters). We generated scenarios with missing costs and outcomes that differed, for example, according to the proportion with missing data (10%-50%), the covariates that predicted missing data (individual, cluster-level), and the missingness mechanism: missing completely at random (MCAR), missing at random (MAR), or missing not at random (MNAR). We estimated incremental net benefits (INBs) for each approach and compared them with the estimates from the fully observed data, the "true" INBs.

Results: When costs and outcomes were assumed to be MCAR, the INBs for each approach were similar to the true estimates. When data were MAR, the point estimates from the CCA differed from the true estimates. Multilevel MI provided point estimates and standard errors closer to the true values than did single-level MI across all settings, including those in which a high proportion of observations had cost and outcome data MAR and when data were MNAR.

Conclusions: Multilevel MI accommodates the multilevel structure of the data in CEAs that use cluster trials and provides accurate cost-effectiveness estimates across the range of circumstances considered.
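The paper's central point is that the imputation model must respect the clustering; the pooling step afterwards is standard multiple-imputation practice. As a sketch of that final step, the code below (all numbers invented) computes incremental net benefits and combines estimates across imputed data sets with Rubin's rules:

```python
import numpy as np

def incremental_net_benefit(lam, delta_effect, delta_cost):
    """INB = lambda * (incremental effect) - (incremental cost)."""
    return lam * delta_effect - delta_cost

def rubin_pool(estimates, variances):
    """Rubin's rules: pool point estimates and variances across M imputed
    data sets (within- plus between-imputation variance)."""
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    q_bar = estimates.mean()
    w = variances.mean()       # within-imputation variance
    b = estimates.var(ddof=1)  # between-imputation variance
    return q_bar, w + (1 + 1 / m) * b

# Hypothetical INB estimates (and their variances) from M = 5 imputed data
# sets, at a willingness-to-pay of 20,000 per unit of effect.
inbs = [incremental_net_benefit(20_000, de, dc)
        for de, dc in [(0.010, 150), (0.012, 140), (0.009, 160),
                       (0.011, 155), (0.010, 145)]]
variances = [900.0, 950.0, 880.0, 920.0, 910.0]
print(rubin_pool(inbs, variances))
```

With multilevel MI, each completed-data estimate and variance would come from a multilevel analysis model, but the pooling formula is unchanged.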
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1177/0272989X13492203DOI Listing
November 2013

A flexible joint modeling framework for longitudinal and time-to-event data with overdispersion.

Stat Methods Med Res 2016 08 18;25(4):1661-76. Epub 2013 Jul 18.

Faculty of Medicine, Katholieke Universiteit Leuven, Leuven, Belgium.

We combine conjugate and normal random effects in a joint model for outcomes, at least one of which is non-Gaussian, with particular emphasis on cases in which one of the outcomes is of survival type. Conjugate random effects are used to relax the often-restrictive mean-variance prescription in the non-Gaussian outcome, while normal random effects account not only for the correlation induced by repeated measurements from the same subject but also for the association between the different outcomes. Using a case study in chronic heart failure, we show that switching to our extended framework can improve model fit, even to the point of affecting significance tests. Because the conjugate random effects can be integrated out analytically, the framework is easily estimated by maximum likelihood in standard software.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1177/0962280213495994DOI Listing
August 2016

Longitudinal prediction of divorce in Russia: the role of individual and couple drinking patterns.

Alcohol Alcohol 2013 Nov-Dec;48(6):737-42. Epub 2013 Jul 12.


Aims: The aim of the study was to explore associations between dimensions of alcohol use in married couples and subsequent divorce in Russia using longitudinal data.

Methods: Follow-up data on 7157 married couples were extracted from 14 consecutive annual rounds (1994-2010) of the Russian Longitudinal Monitoring Survey, a national population-based panel study. Discrete-time hazard models were fitted to estimate the probability of divorce among married couples by drinking patterns reported in the previous survey wave.

Results: In adjusted models, increased odds of divorce were associated with greater frequency of husband and wife drinking (test for trend P = 0.005, and P = 0.05, respectively), wife's binge drinking (P = 0.05) and husband's heavy vodka drinking (P = 0.005). Couples in whom the wife drank more frequently than the husband were more likely to divorce (OR 2.86, 95% CI 1.52-5.36), compared with other combinations of drinking. The association between drinking and divorce was stronger in regions outside Moscow or St. Petersburg.

Conclusion: This study adds to the sparse literature on the topic and suggests that in Russia heavy and frequent drinking of both husbands and wives put couples at greater risk of future divorce, with some variation by region and aspect of alcohol use.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1093/alcalc/agt068DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3799559PMC
May 2014

Comparative field evaluation of combinations of long-lasting insecticide treated nets and indoor residual spraying, relative to either method alone, for malaria prevention in an area where the main vector is Anopheles arabiensis.

Parasit Vectors 2013 Feb 22;6:46. Epub 2013 Feb 22.

Environmental Health and Ecological Sciences Thematic Group, Ifakara Health Institute, Ifakara, Tanzania.

Background: Long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS) are commonly used together in the same households to improve malaria control despite inconsistent evidence on whether such combinations actually offer better protection than nets alone or IRS alone.

Methods: Comparative tests were conducted using experimental huts fitted with LLINs, untreated nets, IRS plus untreated nets, or combinations of LLINs and IRS, in an area where Anopheles arabiensis is the predominant malaria vector species. Three LLIN types, Olyset®, PermaNet 2.0® and Icon Life® nets, and three IRS treatments, pirimiphos-methyl, DDT, and lambda-cyhalothrin, were used singly or in combination. We compared the number of mosquitoes entering the huts, the proportion and number killed, the proportions prevented from blood-feeding, the times at which mosquitoes exited the huts, and the proportions caught exiting. The tests were run for four months in the dry season and another six months in the wet season, each time using new intact nets.

Results: All the net types, used with or without IRS, prevented >99% of indoor mosquito bites. Adding PermaNet 2.0® or Icon Life® nets, but not Olyset® nets, to huts with any IRS increased mortality of malaria vectors relative to IRS alone. However, of all the IRS treatments, only pirimiphos-methyl significantly increased vector mortality relative to LLINs alone, and this increase was modest. Overall, median mortality of An. arabiensis caught in huts with any of the treatments did not exceed 29%. No treatment reduced entry of the vectors into the huts, apart from marginal reductions due to PermaNet 2.0® nets and DDT. More than 95% of all mosquitoes were caught in exit traps rather than inside the huts.

Conclusions: Where the main malaria vector is An. arabiensis, adding IRS to houses with intact pyrethroid LLINs does not enhance household-level protection unless the IRS employs a non-pyrethroid insecticide such as pirimiphos-methyl, which can confer modest enhancements. In contrast, adding intact bednets on top of IRS enhances protection by preventing mosquito blood-feeding (even if the nets are non-insecticidal) and by slightly increasing mosquito mortality (in the case of LLINs). The primary mode of action of intact LLINs against An. arabiensis is clearly bite prevention rather than insecticidal activity. Therefore, where resources are limited, the priority should be to ensure that everyone at risk consistently uses LLINs and that the nets are regularly replaced before becoming excessively torn. Measures that maximize bite prevention (e.g., proper net sizes to effectively cover sleeping spaces, stronger net fibres that resist tears and burns, and net-use practices that preserve net longevity) should be emphasized.
View Article and Find Full Text PDF

Download full-text PDF

Source
http://dx.doi.org/10.1186/1756-3305-6-46DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3606331PMC
February 2013