Publications by authors named "Iris Shimizu"

13 Publications

National Center for Health Statistics Data Presentation Standards for Proportions.

Vital Health Stat 2 2017 Aug(175):1-22

The National Center for Health Statistics (NCHS) disseminates information on a broad range of health topics through diverse publications. These publications must rely on clear and transparent presentation standards that can be broadly and efficiently applied. Standards are particularly important for large, cross-cutting reports where estimates cannot be individually evaluated and indicators of precision cannot be included alongside the estimates. This report describes the NCHS Data Presentation Standards for Proportions. The multistep NCHS Data Presentation Standards for Proportions are based on a minimum denominator sample size and on the absolute and relative widths of a confidence interval calculated using the Clopper-Pearson method. Proportions (usually multiplied by 100 and expressed as percentages) are the most commonly reported estimates in NCHS reports.
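The Clopper-Pearson method named above inverts binomial tail probabilities to get an exact interval. The sketch below is illustrative only (it is not the NCHS implementation, and the counts are hypothetical); it recovers the limits by bisection using only the Python standard library:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05, tol=1e-10):
    """Exact (Clopper-Pearson) confidence limits for a binomial
    proportion, found by bisection on the binomial tail probabilities."""
    def solve(keep_low):
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if keep_low(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower limit: largest p with P(X >= k | p) <= alpha/2
    lower = 0.0 if k == 0 else solve(lambda p: 1 - binom_cdf(k - 1, n, p) <= alpha / 2)
    # Upper limit: smallest p with P(X <= k | p) <= alpha/2
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

# Hypothetical example: 25 events among 100 sampled persons
lower, upper = clopper_pearson(25, 100)
```

The standards then judge the estimate by the minimum denominator sample size and by the absolute and relative widths of this interval.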
August 2017

Nonresponse Bias in Estimates From the 2012 National Ambulatory Medical Care Survey.

Vital Health Stat 2 2016 Feb(171):1-42

Background: The National Ambulatory Medical Care Survey (NAMCS) is an annual, nationally representative sample survey of physicians and of visits to physicians. Two major changes were made to the 2012 NAMCS to support reliable state estimates. The sampling design changed from an area sample to a fivefold-larger list sample of physicians stratified by the nine U.S. Census Bureau divisions and 34 states. At the same time, the data collection mode changed from paper forms to laptop-assisted data collection and from physician or office staff abstraction of medical records to predominantly Census interviewer abstraction using automated Patient Record Forms (PRFs).

Objectives: This report presents an analysis of potential nonresponse bias in 2012 NAMCS estimates of physicians and visits to physicians. This analysis used two sets of physician-based estimates: one measuring the completion of the physician induction interview and another based on completing any PRF. Evaluation of visit response was measured by the percentage of expected PRFs completed. For each type of physician estimate, response was evaluated by (a) comparing percent distributions of respondents and nonrespondents by physician characteristics available for all in-scope sample physicians, (b) comparing response rates by physician characteristics with the national response rate, and (c) analyzing nonresponse bias after adjustments for nonresponse were applied in survey weights. For visit estimates, response was evaluated by (a) comparing the percent distributions of expected visits and completed visits, (b) comparing visit response rates by physician characteristics with the national visit response rate, and (c) analyzing visit-level nonresponse bias after adjustments for nonresponse were applied in visit survey weights. Finally, potential bias in the two physician-level estimates was computed by comparing them with those from an external survey.
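The potential bias examined in comparisons like (a) through (c) is often approximated as the nonresponse rate times the respondent-nonrespondent gap. A minimal sketch with hypothetical figures (not NAMCS results):

```python
def nonresponse_bias(resp_mean, nonresp_mean, response_rate):
    """Approximate bias of the respondent-based mean relative to the
    full-sample mean: (1 - response_rate) * (resp_mean - nonresp_mean)."""
    return (1 - response_rate) * (resp_mean - nonresp_mean)

# Hypothetical illustration: 60% response; respondents average 0.50 on
# some characteristic, nonrespondents 0.40, so the respondent-based
# estimate overstates the full-sample mean by 0.04
bias = nonresponse_bias(0.50, 0.40, 0.60)
```

Weighting adjustments for nonresponse aim to shrink exactly this gap, which is why the analysis re-evaluates bias after the adjusted survey weights are applied.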
February 2016

A Note on the Effect of Data Clustering on the Multiple-Imputation Variance Estimator: A Theoretical Addendum to Lewis et al. (2014).

J Off Stat 2016 10;32(1):147-164. Epub 2016 Mar 10.

National Center for Health Statistics, Centers for Disease Control and Prevention, Hyattsville, MD, 20782, U.S.A.

Multiple imputation is a popular approach to handling missing data. Although it was originally motivated by survey nonresponse problems, it has been readily applied to other data settings. However, its general behavior still remains unclear when applied to survey data with complex sample designs, including clustering. Recently, Lewis et al. (2014) compared single- and multiple-imputation analyses for certain incomplete variables in the 2008 National Ambulatory Medical Care Survey, which has a nationally representative, multistage, and clustered sampling design. Their study results suggested that the increase in the variance estimate due to multiple imputation compared with single imputation largely disappears for estimates with large design effects. We complement their empirical research by providing some theoretical reasoning. We consider data sampled from an equally weighted, single-stage cluster design and characterize the process using a balanced, one-way normal random-effects model. Assuming that the missingness is completely at random, we derive analytic expressions for the within- and between-multiple-imputation variance estimators for the mean estimator, and thus conveniently reveal the impact of design effects on these variance estimators. We propose approximations for the fraction of missing information in clustered samples, extending previous results for simple random samples. We discuss some generalizations of this research and its practical implications for data release by statistical agencies.
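The within- and between-multiple-imputation variance estimators discussed here combine through the standard Rubin rules, T = W + (1 + 1/m)B. A minimal sketch with hypothetical completed-data estimates (this is the generic combining rule, not the paper's clustered-design derivation):

```python
def rubin_combine(means, variances):
    """Combine m completed-data estimates by Rubin's rules: pooled mean,
    within-imputation variance W (average of the m variances),
    between-imputation variance B (variance of the m means), and
    total variance T = W + (1 + 1/m) * B."""
    m = len(means)
    qbar = sum(means) / m
    w = sum(variances) / m
    b = sum((q - qbar) ** 2 for q in means) / (m - 1)
    return qbar, w + (1 + 1 / m) * b

# Hypothetical: five imputed-data means and their sampling variances
qbar, t = rubin_combine([10.0, 10.2, 9.9, 10.1, 9.8],
                        [0.50, 0.52, 0.48, 0.51, 0.49])
```

The paper's point can be read off this formula: when the within-imputation variance W is inflated by a large design effect, the (1 + 1/m)B penalty becomes a relatively small share of T.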
http://dx.doi.org/10.1515/jos-2016-0007
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6444354
March 2016

Determining Sufficient Number of Imputations Using Variance of Imputation Variances: Data from 2012 NAMCS Physician Workflow Mail Survey.

Appl Math (Irvine) 2014 Dec;5:3421-3430

National Center for Health Statistics, Centers for Disease Control and Prevention, Hyattsville, MD, USA.

How many imputations are sufficient in multiple imputation? The answer given by different researchers varies from as few as 2-3 to as many as hundreds. Perhaps no single number of imputations would fit all situations. In this study, m_min, the minimally sufficient number of imputations, was determined based on the relationship between m, the number of imputations, and SE(v), the standard error of the imputation variances, using the 2012 National Ambulatory Medical Care Survey (NAMCS) Physician Workflow mail survey. Five variables of various value ranges, variances, and missing data percentages were tested. For all variables tested, SE(v) decreased as m increased. The value of m above which the cost of a further increase in m would outweigh the benefit of reducing SE(v) was recognized as m_min. This method has the potential to be used by anyone to determine the m_min that fits his or her own data situation.
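The quantity driving such a stopping rule, the standard error of the per-imputation variances, can be sketched as follows. The variance estimates below are hypothetical, not NAMCS values, and SE(v) here denotes the standard deviation of the m variances divided by sqrt(m):

```python
def se_of_imputation_variances(imp_variances):
    """Standard error of the mean of the per-imputation variance
    estimates: sd(variances) / sqrt(m), m = number of imputations."""
    m = len(imp_variances)
    mean = sum(imp_variances) / m
    var = sum((v - mean) ** 2 for v in imp_variances) / (m - 1)
    return (var / m) ** 0.5

# Hypothetical variance estimates from m = 5 vs. m = 10 imputations:
# with more imputations, the SE of the variances shrinks.
se_m5 = se_of_imputation_variances([1.0, 1.2, 0.8, 1.1, 0.9])
se_m10 = se_of_imputation_variances(
    [1.0, 1.2, 0.8, 1.1, 0.9, 1.05, 0.95, 1.15, 0.85, 1.0])
```

Plotting this SE against m and stopping where the curve flattens is one way to locate the point where extra imputations stop paying for themselves.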
http://dx.doi.org/10.4236/am.2014.521319
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4937882
December 2014

Negative Binomial Regression Model in Analysis of Wait Time at Hospital Emergency Department.

Proc Am Stat Assoc 2014

National Center for Health Statistics, 3311 Toledo Road, Hyattsville, MD 20782.

Wait time is the difference between the time a patient arrives in the emergency department (ED) and the time an ED provider examines that patient. This study focuses on the development of a negative binomial model to examine factors associated with ED wait time using the National Hospital Ambulatory Medical Care Survey (NHAMCS). Conducted by the National Center for Health Statistics (NCHS), NHAMCS has been gathering, analyzing, and disseminating information annually about visits made for medical care to hospital outpatient departments and EDs since 1992. To analyze ED wait times, a negative binomial model was fit to the ED visit data using publicly released micro data from the 2009 NHAMCS. In this model, wait time is the dependent variable, while hospital, patient, and visit characteristics are the independent variables. Wait time was collapsed into discrete values representing 15-minute intervals. The findings are presented.
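The negative binomial model is typically preferred over Poisson here because wait-time counts are overdispersed (variance exceeds the mean). A minimal sketch of the 15-minute binning and the negative binomial mean-variance relationship, using hypothetical wait times and the integer-r pmf for simplicity:

```python
from math import comb

def nb_pmf(k, r, p):
    """Negative binomial pmf: probability of k failures before the r-th
    success, success probability p. Mean r(1-p)/p; variance
    r(1-p)/p**2, which always exceeds the mean (overdispersion)."""
    return comb(k + r - 1, k) * p**r * (1 - p)**k

def to_15min_bins(wait_minutes):
    """Collapse wait times in minutes into counts of whole 15-minute
    intervals, as in the modeling described above."""
    return [w // 15 for w in wait_minutes]

# Hypothetical ED wait times in minutes
bins = to_15min_bins([5, 20, 47, 62, 90])
# NB(r=4, p=0.5): mean 4 intervals, variance 8 (twice the mean)
nb_mean = 4 * (1 - 0.5) / 0.5
nb_var = 4 * (1 - 0.5) / 0.5 ** 2
```

In the full model the mean of this distribution is linked to the hospital, patient, and visit covariates through a log link.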
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7183738
January 2014

Lessons from the 2006 Louisiana health and population survey.

Disasters 2012 Apr 13;36(2):270-90. Epub 2011 Oct 13.

Louisiana Public Health Institute, New Orleans, LA 70112, USA.

The 2005 hurricane season caused extensive damage and induced a mass migration of approximately 1.1 million people from southern Louisiana in the United States. Current and accurate estimates of population size and demographics and an assessment of the critical needs for public services were required to guide recovery efforts. Since forecasts using pre-hurricane data may produce inaccurate estimates of the post-hurricane population, a household survey in 18 hurricane-affected parishes was conducted to provide timely and credible information on the size of these populations, their demographics and their condition. This paper describes the methods used, the challenges encountered, and the key factors for successful implementation. This post-disaster survey was unique because it identified the needs of the people in the affected parishes and quantified the number of people with these needs. Consequently, this survey established new population and health indicator baselines that otherwise would not have been available to guide the relief and recovery efforts in southern Louisiana.
http://dx.doi.org/10.1111/j.1467-7717.2011.01254.x
April 2012

Hospital preparedness for emergency response: United States, 2008.

Natl Health Stat Report 2011 Mar(37):1-14

U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics, Hyattsville, MD 20782, USA.

Objective: This report is a summary of hospital preparedness for responding to public health emergencies, including mass casualties and epidemics of naturally occurring diseases such as influenza.

Methods: Data are from an emergency response preparedness supplement to the 2008 National Hospital Ambulatory Medical Care Survey, which uses a national probability sample of nonfederal general and short-stay hospitals in the United States. Sample data were weighted to produce national estimates.
March 2011

Redesign and operation of the National Home And Hospice Care Survey, 2007.

Vital Health Stat 1 2010 Jul(53):1-192

U.S. Department of Health & Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics, Division of Health Statistics, Hyattsville, Maryland 20782, USA.

Objectives: This methods report provides an overview of the redesigned National Home and Hospice Care Survey (NHHCS) conducted in 2007. NHHCS is a national probability sample survey that collects data on U.S. home health and hospice care agencies, their staffs and services, and the people they serve. The redesigned survey included computerized data collection, greater survey content, increased sample sizes for current home health care patients and hospice care discharges, and a first-ever supplemental survey called the National Home Health Aide Survey.

Methods: The 2007 NHHCS was conducted between August 2007 and February 2008. NHHCS used a two-stage probability sampling design in which agencies providing home health and/or hospice care were sampled. Then, up to 10 current patients were sampled from each home health care agency, up to 10 discharges from each hospice care agency, and a combination of up to 10 patients/discharges from each agency that provided both home health and hospice care services. In-person interviews were conducted with agency directors and their designated staff; no interviews were conducted directly with patients. The survey instrument contained agency- and person-level modules, sampling modules, and a self-administered staffing questionnaire.

Results: Data were collected on 1036 agencies, 4683 current home health care patients, and 4733 hospice care discharges. The first-stage agency response rate, weighted for differential probabilities of selection, was 59%. The second-stage patient/discharge weighted response rate was 96%. Three public-use files were released: an agency-level file, a patient/discharge-level file, and a medication file. The files include sampling weights, which are necessary to generate national estimates, and design variables to enable users to calculate accurate standard errors.
July 2010

Linked surveys of health services utilization.

Stat Med 2007 Apr;26(8):1788-801

National Center for Health Statistics, CDC, 3311 Toledo Road, Room 5212, Hyattsville, MD 20782, USA.

The linked population/establishment survey (LS) of health services utilization is a two-phase sample survey that links the sample designs of the population sample survey (PS) and the health-care provider establishment sample survey (ES) of health services utilization. In Phase I, household respondents in the PS identify their health-care providers during a specified calendar period. In Phase II, health-care providers identified in Phase I report the variables of interest for all or a sample of their transactions with all households during the same calendar period. The LS has been proposed as a potential design alternative to the PS whenever the health-care transactions of interest are hard to find or enumerate in household surveys, and as a potential design alternative to the ES whenever it is infeasible or expensive to construct or maintain complete provider sampling frames that list all health-care providers with good measures of provider size. Supposing that the non-sampling errors are ignorable, how do the LS, PS, and ES sampling errors compare? This paper addresses that question by summarizing and extending recent research findings that compare expressions of the sampling variance of (1) the LS and PS of equivalent household sample size and (2) the LS and the ES of equivalent expected health-care provider and transaction sample sizes. The paper identifies the parameters contributing to the precision differences and assesses the conditions that favour the LS or one of the other surveys. Published in 2007 by John Wiley & Sons, Ltd.
http://dx.doi.org/10.1002/sim.2799
April 2007

Effects of form length and item format on response patterns and estimates of physician office and hospital outpatient department visits. National Ambulatory Medical Care Survey and National Hospital Ambulatory Medical Care Survey, 2001.

Vital Health Stat 2 2005 Jun(139):1-32

Division of Health Care Statistics, U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, Hyattsville, Maryland 20782, USA.

Objectives: This report describes effects due to form length and/or item formats on respondent cooperation and survey estimates.

Methods: Two formats were used for the Patient Record form for the 2001 NAMCS and OPD component of the NHAMCS: a short form with 70 subitems and a long form with 140 subitems. The short form also contained many write-in items and fit on a one-sided page. The long form contained more check boxes and other unique items and required a two-sided page. The NAMCS sample of physicians and NHAMCS sample of hospitals were randomly divided into two half samples and randomly assigned to either the short or long form. Unit and item nonresponse rates, as well as survey estimates from the two forms, were compared using SUDAAN software, which takes into account the complex sample design of the surveys.

Results: Physician unit response was lower for the long form overall and in certain geographic regions. Overall OPD unit response was not affected by form length, although there were some differences in favor of the long form for some types of hospitals. Although the long form had twice as many check boxes as the short form, there was no difference in the percentage of visits with any diagnostic or screening services ordered or provided. However, visit estimates were usually higher for services collected with long-form check boxes than with (recoded) short-form write-in entries. Finally, the study confirmed the feasibility of collecting certain items found only on the long form.

Conclusion: Overall, physician cooperation was more sensitive to form length than was OPD cooperation. The quality of the data was not affected by form length. Visit estimates were influenced by both content and item format.
June 2005

Guide to using masked design variables to estimate standard errors in public use files of the National Ambulatory Medical Care Survey and the National Hospital Ambulatory Medical Care Survey.

Inquiry 2003;40(4):401-15

Ambulatory Care Statistics Branch, Division of Health Care Statistics, National Center for Health Statistics, Hyattsville, MD 20782, USA.

Until recently, sample design information needed to correctly estimate standard errors from the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS) public use files was not released for confidentiality reasons. In 2002, masked sample design variables were released for the first time with the 1995-2000 NAMCS and NHAMCS public use files. This paper shows how to use masked design variables to compute standard errors in three software applications. It also discusses when masking overstates or understates "in-house" standard errors, and how masking affects the significance levels of point estimates and logistic regression parameters.
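The "in-house" standard errors referred to here come from design-based variance estimators. A minimal sketch of the usual with-replacement, stratified-PSU (Taylor linearization) variance of a weighted total, assuming at least two PSUs per stratum; the record layout below is hypothetical, not the actual masked-variable format:

```python
from collections import defaultdict

def strat_psu_variance(records):
    """With-replacement variance of a weighted total under a stratified,
    clustered design: sum over strata h of n_h/(n_h-1) times the sum of
    squared deviations of weighted PSU totals z_hi from their stratum
    mean. `records` are (stratum, psu, weight, y) tuples; each stratum
    must contain at least two PSUs."""
    psu_totals = defaultdict(float)
    for stratum, psu, w, y in records:
        psu_totals[(stratum, psu)] += w * y
    by_stratum = defaultdict(list)
    for (stratum, _), z in psu_totals.items():
        by_stratum[stratum].append(z)
    var = 0.0
    for zs in by_stratum.values():
        n_h = len(zs)
        zbar = sum(zs) / n_h
        var += n_h / (n_h - 1) * sum((z - zbar) ** 2 for z in zs)
    return var

# Hypothetical: one stratum, two PSUs, weight 2 per visit record
var = strat_psu_variance([("A", 1, 2.0, 1.0),
                          ("A", 1, 2.0, 1.0),
                          ("A", 2, 2.0, 0.0)])
```

Survey software such as SUDAAN, Stata's `svy` commands, and the R `survey` package applies this same logic to the released stratum and PSU design variables; masking perturbs those variables, which is why the resulting standard errors can differ from the in-house values.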
http://dx.doi.org/10.5034/inquiryjrnl_40.4.401
April 2004

Combining estimates from complementary surveys: a case study using prevalence estimates from national health surveys of households and nursing homes.

Public Health Rep 2002 Jul-Aug;117(4):393-407

Office of Research and Methodology, National Center for Health Statistics, Hyattsville, MD 20782, USA.

Objectives: When a single survey does not cover a domain of interest, estimates from two or more complementary surveys can be combined to extend coverage. The purposes of this article are to discuss and demonstrate the benefits of combining estimates from complementary surveys and to provide a catalog of the analytic issues involved.

Methods: The authors present a case study in which data from the National Health Interview Survey and the National Nursing Home Survey were combined to obtain prevalence estimates for several chronic health conditions for the years 1985, 1995, and 1997. The combined prevalences were estimated by ratio estimation, and the associated variances were estimated by Taylor linearization. The survey weights, stratification, and clustering were reflected in the estimation procedures.

Results: In the case study, for the age group of 65 and older, the combined prevalence estimates for households and nursing homes are close to those for households alone. For the age group of 85 and older, however, the combined estimates are sometimes substantially different from the household estimates. Such differences are seen both for estimates within a single year and for estimates of trends across years.

Conclusions: Several general issues regarding comparability arise when there is a goal of combining complementary survey data. As illustrated by this case study, combining estimates can be very useful for improving coverage and avoiding misleading conclusions.
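For a combined prevalence, the ratio estimation described in the Methods reduces to total weighted cases over the total weighted population covered by the complementary surveys. A minimal sketch with hypothetical counts (a real analysis would also carry the Taylor-linearized variances and the design information through):

```python
def combined_prevalence(surveys):
    """Ratio-estimate a combined prevalence from complementary surveys:
    sum of weighted case counts divided by sum of weighted population
    sizes. Each survey contributes a (weighted_cases, weighted_pop)
    pair covering a disjoint part of the target population."""
    cases = sum(c for c, _ in surveys)
    pop = sum(p for _, p in surveys)
    return cases / pop

# Hypothetical: a household survey covering 30 million persons with
# 2.0 million weighted cases, plus a nursing home survey covering
# 1 million residents with 0.5 million weighted cases
p = combined_prevalence([(2.0e6, 30e6), (0.5e6, 1e6)])
```

Because institutionalized persons often have much higher prevalences, the combined estimate can move noticeably away from the household-only figure, which is the effect the case study documents for the 85-and-older group.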
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1497448
http://dx.doi.org/10.1093/phr/117.4.393
January 2003