Publications by authors named "James M Brown"

64 Publications

Radiomics Repeatability Pitfalls in a Scan-Rescan MRI Study of Glioblastoma.

Radiol Artif Intell 2021 Jan 16;3(1):e190199. Epub 2020 Dec 16.

Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology (K.V.H., J.B.P., A.L.B., K.C., P.S., J.M.B., M.C.P., B.R.R., J.K.C.), and Stephen E. and Catherine Pappas Center for Neuro-Oncology (T.T.B., E.R.G.), Massachusetts General Hospital, 149 13th St, Charlestown, MA 02129; and Harvard-MIT Division of Health Sciences and Technology, Cambridge, Mass (K.V.H., J.B.P., K.C.).

Purpose: To determine the influence of preprocessing on the repeatability and redundancy of radiomics features extracted using a popular open-source radiomics software package in a scan-rescan glioblastoma MRI study.

Materials And Methods: In this secondary analysis, T2-weighted fluid-attenuated inversion recovery (FLAIR) and T1-weighted postcontrast images from 48 patients (mean age, 56 years [range, 22-77 years]) diagnosed with glioblastoma were included from two prospective studies (ClinicalTrials.gov NCT00662506 [2009-2011] and NCT00756106 [2008-2011]). All patients underwent two baseline scans 2-6 days apart using identical imaging protocols on 3-T MRI systems. No treatment occurred between scan and rescan, and tumors were essentially unchanged visually. Radiomic features were extracted using PyRadiomics (https://pyradiomics.readthedocs.io/) under varying conditions, including normalization strategies and intensity quantization. Intraclass correlation coefficients were then determined between feature values of the scan and rescan.
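Scan-rescan repeatability of each feature was quantified with intraclass correlation coefficients. As a minimal sketch, assuming the two-way random-effects, absolute-agreement, single-measurement form, ICC(2,1) (the abstract does not specify which ICC variant was used), the computation for one feature across subjects might look like:

```python
import numpy as np

def icc_2_1(scan, rescan):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    scan, rescan: 1-D arrays of one radiomic feature's values across subjects."""
    Y = np.column_stack([scan, rescan]).astype(float)
    n, k = Y.shape                       # n subjects, k = 2 sessions
    grand = Y.mean()
    row_means = Y.mean(axis=1)           # per-subject means
    col_means = Y.mean(axis=0)           # per-session means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((Y - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subject mean square
    msc = ss_cols / (k - 1)              # between-session mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For a perfectly repeatable feature the score is 1; it falls toward 0 as scan-rescan disagreement grows.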

Results: Shape features showed a higher repeatability than intensity (adjusted P < .001) and texture features (adjusted P < .001) for both T2-weighted FLAIR and T1-weighted postcontrast images. Normalization improved the overlap between the region of interest intensity histograms of scan and rescan (adjusted P < .001 for both T2-weighted FLAIR and T1-weighted postcontrast images), except in scans where brain extraction failed. As such, normalization significantly improved the repeatability of intensity features from T2-weighted FLAIR scans (adjusted P = .003 [z score normalization] and adjusted P = .002 [histogram matching]). The use of a relative intensity binning strategy, as opposed to the default absolute intensity binning, reduced correlation between gray-level co-occurrence matrix features after normalization.

Conclusion: Both normalization and intensity quantization affect the repeatability and redundancy of features, emphasizing the importance of accurate reporting of methodology in radiomics articles and of understanding the limitations of choices made in pipeline design. © RSNA, 2020. See also the commentary by Tiwari and Verma in this issue.

Source
DOI: http://dx.doi.org/10.1148/ryai.2020190199
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7845781
January 2021

Shorter fixation durations for up-directed saccades during saccadic exploration: A meta-analysis.

J Eye Mov Res 2020 Mar 1;12(8). Epub 2020 Mar 1.

University of Georgia, Athens, USA.

Utilizing 23 datasets, we report a meta-analysis of an asymmetry in presaccadic fixation durations for saccades directed above and below eye fixation during saccadic exploration. For inclusion in the meta-analysis, saccadic exploration of complex visual displays had to have been made without gaze-contingent manipulations. Effect sizes for the asymmetry were quantified as Hedges' g. Pooled effect sizes indicated significant asymmetries such that, during saccadic exploration in a variety of tasks, presaccadic fixation durations for saccades directed into the upper visual field were reliably shorter than those for saccades into the lower visual field. It is contended that the asymmetry is robust and important for efforts aimed at modelling when a saccade is initiated as a function of ensuing saccade direction.
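The effect size used here, Hedges' g, is Cohen's d with a small-sample bias correction; a sketch of the standard formula for two independent groups:

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' g for two independent samples: pooled-SD standardized mean
    difference (Cohen's d) times a small-sample bias-correction factor."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    # Pooled standard deviation (unbiased sample variances)
    sp = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1))
                 / (n1 + n2 - 2))
    d = (x.mean() - y.mean()) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction factor
    return j * d
```

In the meta-analysis, one such g is computed per dataset (upper- vs. lower-field presaccadic durations) and the per-dataset values are then pooled.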

Source
DOI: http://dx.doi.org/10.16910/jemr.12.8.5
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7881898
March 2020

Applications of Artificial Intelligence for Retinopathy of Prematurity Screening.

Pediatrics 2021 Mar;147(3)

Athinoula A. Martinos Center for Biomedical Imaging and Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts.

Objectives: Childhood blindness from retinopathy of prematurity (ROP) is increasing as a result of improvements in neonatal care worldwide. We evaluate the effectiveness of artificial intelligence (AI)-based screening in an Indian ROP telemedicine program and whether differences in ROP severity between neonatal care units (NCUs) identified by using AI are related to differences in oxygen-titrating capability.

Methods: External validation study of an existing AI-based quantitative severity scale for ROP on a data set of images from the Retinopathy of Prematurity Eradication Save Our Sight ROP telemedicine program in India. All images were assigned an ROP severity score (1-9) by using the Imaging and Informatics in Retinopathy of Prematurity Deep Learning system. We calculated the area under the receiver operating characteristic curve and sensitivity and specificity for treatment-requiring retinopathy of prematurity. Using multivariable linear regression, we evaluated the mean and median ROP severity in each NCU as a function of mean birth weight, gestational age, and the presence of oxygen blenders and pulse oxygenation monitors.
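The reported discrimination metrics can be reproduced from severity scores and treatment labels; a sketch using scikit-learn for the AUC, with sensitivity and specificity computed at a score cutoff (the cutoff below is illustrative, not taken from the study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def screening_metrics(y_true, severity_scores, threshold):
    """AUC for treatment-requiring ROP from continuous severity scores,
    plus sensitivity/specificity after binarizing at `threshold`."""
    y_true = np.asarray(y_true)
    scores = np.asarray(severity_scores, float)
    y_pred = scores >= threshold
    auc = roc_auc_score(y_true, scores)
    tp = np.sum(y_pred & (y_true == 1))
    fn = np.sum(~y_pred & (y_true == 1))
    tn = np.sum(~y_pred & (y_true == 0))
    fp = np.sum(y_pred & (y_true == 0))
    return auc, tp / (tp + fn), tn / (tn + fp)
```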

Results: The area under the receiver operating characteristic curve for detection of treatment-requiring retinopathy of prematurity was 0.98, with 100% sensitivity and 78% specificity. We found higher median (interquartile range) ROP severity in NCUs without oxygen blenders and pulse oxygenation monitors, most apparent in bigger infants (>1500 g and 31 weeks' gestation: 2.7 [2.5-3.0] vs 3.1 [2.4-3.8]; P = .007, with adjustment for birth weight and gestational age).

Conclusions: Integration of AI into ROP screening programs may lead to improved access to care for secondary prevention of ROP and may facilitate assessment of disease epidemiology and NCU resources.

Source
DOI: http://dx.doi.org/10.1542/peds.2020-016618
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7924138
March 2021

LAMA: automated image analysis for the developmental phenotyping of mouse embryos.

Development 2021 Mar 24;148(18). Epub 2021 Mar 24.

Medical Research Council Harwell Institute, Harwell OX11 0RD, UK

Advanced 3D imaging modalities, such as micro-computed tomography (micro-CT), have been incorporated into the high-throughput embryo pipeline of the International Mouse Phenotyping Consortium (IMPC). This project generates large volumes of raw data that cannot be immediately exploited without significant resources of personnel and expertise. Thus, rapid automated annotation is crucial to ensure that 3D imaging data can be integrated with other multi-dimensional phenotyping data. We present an automated computational mouse embryo phenotyping pipeline that harnesses the large amount of wild-type control data available in the IMPC embryo pipeline in order to address issues of low mutant sample number as well as incomplete penetrance and variable expressivity. We also investigate the effect of developmental substage on automated phenotyping results. Designed primarily for developmental biologists, our software performs image pre-processing, registration, statistical analysis and segmentation of embryo images. We also present a novel anatomical E14.5 embryo atlas average and, using it with LAMA, show that we can uncover known and novel dysmorphology from two IMPC knockout lines.

Source
DOI: http://dx.doi.org/10.1242/dev.192955
March 2021

A Failed Cardiac Surgery Program in an Underserved Minority Population County Reimagined: The Power of Partnership.

J Am Heart Assoc 2020 12 20;9(23):e018230. Epub 2020 Nov 20.

University of Maryland School of Medicine and University of Maryland Capital Region Health Baltimore MD.

Background: Prince George's County, Maryland, historically a medically underserved region, has a population of 909 327 and a high incidence of cardiometabolic syndrome and hypertension. Application of level I evidence practices in such areas requires the availability of highly advanced cardiovascular interventions. Donabedian principles of quality of care were applied to a failing cardiac surgery program. We hypothesized that a multidisciplinary application of this model, supported by partnership with a university hospital system, could result in improved quality care outcomes.

Methods and Results: A 6-month assessment and planning process commenced in July 2014. Preoperative, intraoperative, and postoperative protocols were developed before program restart. Staff education and training were conducted via team simulation and rehearsal sessions. A total of 425 patients underwent cardiac surgical procedures. Quality tracking of key performance measures was conducted, and 323 isolated coronary artery bypass grafting procedures were performed from July 2014 to December 2019. Key risk factors in our patient demographic were higher than the Society of Thoracic Surgeons national mean. Risk-adjusted outcome data yielded a mortality rate of 0.3% versus 2.2% nationally. The overall major complication rate was lower than expected at 7.1% compared with 11.5% nationally. The readmission rate was less than the Society of Thoracic Surgeons mean for isolated coronary artery bypass grafting (4.0% versus 10.1%, P<0.0001). Significant differences in 6 key performance outcomes were noted, leading to a 3-star Society of Thoracic Surgeons designation in 7 of 8 tracking periods.

Conclusions: Excellent outcomes in cardiac surgery are attainable following program renovation in an underserved region in the setting of low volume. The principles and processes applied have potential broad application for any quality improvement effort.

Source
DOI: http://dx.doi.org/10.1161/JAHA.120.018230
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7763790
December 2020

Evaluation of a Deep Learning-Derived Quantitative Retinopathy of Prematurity Severity Scale.

Ophthalmology 2020 Oct 27. Epub 2020 Oct 27.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon; Department of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon. Electronic address:

Purpose: To evaluate the clinical usefulness of a quantitative deep learning-derived vascular severity score for retinopathy of prematurity (ROP) by assessing its correlation with clinical ROP diagnosis and by measuring clinician agreement in applying a novel scale.

Design: Analysis of existing database of posterior pole fundus images and corresponding ophthalmoscopic examinations using 2 methods of assigning a quantitative scale to vascular severity.

Participants: Images were from clinical examinations of patients in the Imaging and Informatics in ROP Consortium. Four ophthalmologists and 1 study coordinator evaluated vascular severity on a scale from 1 to 9.

Methods: A quantitative vascular severity score (1-9) was applied to each image using a deep learning algorithm. A database of 499 images was developed for assessment of interobserver agreement.

Main Outcome Measures: Distribution of deep learning-derived vascular severity scores relative to the clinical assessment of zone (I, II, or III), stage (0, 1, 2, or 3), and extent (<3 clock hours, 3-6 clock hours, and >6 clock hours) of stage 3 disease, evaluated using multivariate linear regression; weighted κ values and Pearson correlation coefficients for interobserver agreement on the 1-to-9 vascular severity scale.

Results: For deep learning analysis, a total of 6344 clinical examinations were analyzed. A higher deep learning-derived vascular severity score was associated with more posterior disease, higher disease stage, and higher extent of stage 3 disease (P < 0.001 for all). For a given ROP stage, the vascular severity score was higher in zone I than zones II or III (P < 0.001). Multivariate regression found zone, stage, and extent all were associated independently with the severity score (P < 0.001 for all). For interobserver agreement, the mean ± standard deviation weighted κ value was 0.67 ± 0.06, and the Pearson correlation coefficient ± standard deviation was 0.88 ± 0.04 on the use of a 1-to-9 vascular severity scale.
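Interobserver agreement of this kind is typically computed pairwise between raters; a sketch assuming quadratic κ weights (the abstract says only "weighted κ", so the weighting scheme is an assumption):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def agreement(rater_a, rater_b):
    """Weighted kappa (quadratic weights assumed) and Pearson r between
    two raters' 1-9 vascular severity grades for the same images."""
    kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
    r = np.corrcoef(rater_a, rater_b)[0, 1]
    return kappa, r
```

The study's mean ± SD figures would then come from averaging these pairwise values over all rater pairs.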

Conclusions: A vascular severity scale for ROP seems feasible for clinical adoption; corresponds with zone, stage, extent of stage 3, and plus disease; and facilitates the use of objective technology such as deep learning to improve the consistency of ROP diagnosis.

Source
DOI: http://dx.doi.org/10.1016/j.ophtha.2020.10.025
October 2020

Plus Disease in Retinopathy of Prematurity: Convolutional Neural Network Performance Using a Combined Neural Network and Feature Extraction Approach.

Transl Vis Sci Technol 2020 02 14;9(2):10. Epub 2020 Feb 14.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA.

Purpose: Retinopathy of prematurity (ROP), a leading cause of childhood blindness, is diagnosed by clinical ophthalmoscopic examinations or reading retinal images. Plus disease, defined as abnormal tortuosity and dilation of the posterior retinal blood vessels, is the most important feature to determine treatment-requiring ROP. We aimed to create a complete, publicly available and feature-extraction-based pipeline, I-ROP ASSIST, that achieves convolutional neural network (CNN)-like performance when diagnosing plus disease from retinal images.

Methods: We developed two datasets containing 100 and 5512 posterior retinal images, respectively. After segmenting retinal vessels, we detected the vessel centerlines. We then extracted features relevant to ROP, including tortuosity and dilation measures, and used these features in classifiers, including logistic regression, support vector machines, and neural networks, to assign a severity score to the input. We tested our system with fivefold cross-validation and calculated the area under the curve (AUC) metric for each classifier and dataset.
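A fivefold cross-validated AUC for one such feature-based classifier can be sketched with scikit-learn (logistic regression shown; the study also used support vector machines and neural networks):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def crossval_auc(features, labels):
    """Mean fivefold cross-validated AUC for plus vs. not-plus from
    hand-crafted vessel features (e.g. tortuosity and dilation measures)."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, features, labels, cv=5,
                           scoring="roc_auc").mean()
```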

Results: For predicting plus versus not-plus categories, we achieved 99% and 94% AUC on the first and second datasets, respectively. For predicting pre-plus or worse versus normal categories, we achieved 99% and 88% AUC on the first and second datasets, respectively. The CNN method achieved 98% and 94% for predicting two categories on the second dataset.

Conclusions: Our system combining automatic retinal vessel segmentation, tracing, feature extraction and classification is able to diagnose plus disease in ROP with CNN-like performance.

Translational Relevance: The high performance of I-ROP ASSIST suggests potential applications in automated and objective diagnosis of plus disease.

Source
DOI: http://dx.doi.org/10.1167/tvst.9.2.10
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7346878
February 2020

When figure-ground segregation fails: Exploring antagonistic interactions in figure-ground perception.

Atten Percept Psychophys 2020 Oct;82(7):3618-3635

Department of Psychology, University of Georgia, Athens, GA, 30602-3013, USA.

Perceptual fading of an artificial scotoma can be viewed as a failure of figure-ground segregation, providing a useful tool for investigating possible mechanisms and processes involved in figure-ground perception. Weisstein's antagonistic magnocellular/parvocellular stream figure-ground model proposes P stream activity encodes figure, and M stream activity encodes background. Where a boundary separates two regions, the region that is perceived as figure or ground is determined by the outcome of antagonism between M and P activity within each region and across the boundary between them. The region with the relatively stronger P "figure signal" is perceived as figure, and the region with the relatively stronger M "ground signal" is perceived as ground. From this perspective, fading occurs when the figure signal is overwhelmed by the ground signal. Strengthening the figure signal or weakening the ground signal should make the figure more resistant to fading. Based on research showing that red light suppresses M activity and short wavelength sensitive S-cones provide minimal input to M cells, we used red and blue light to reduce M activity in both figure and ground. The time to fade from stimulus onset until the figure completely disappeared was measured. Every combination of gray, green, red, and blue as figure and/or ground was tested. Compared with gray and green light, fade times were greatest when red or blue light either strengthened the figure signal by reducing M activity in the figure, or weakened the ground signal by reducing M activity in ground. The results support a dynamic antagonistic relationship between M and P activity contributing to figure-ground perception as envisioned in Weisstein's model.

Source
DOI: http://dx.doi.org/10.3758/s13414-020-02097-w
October 2020

Variability in Plus Disease Identified Using a Deep Learning-Based Retinopathy of Prematurity Severity Scale.

Ophthalmol Retina 2020 10 4;4(10):1016-1021. Epub 2020 May 4.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon. Electronic address:

Purpose: Retinopathy of prematurity is a leading cause of childhood blindness worldwide, but clinical diagnosis is subjective, which leads to treatment differences. Our goal was to determine objective differences in the diagnosis of plus disease between clinicians using an automated retinopathy of prematurity (ROP) vascular severity score.

Design: This retrospective cohort study used data from the Imaging and Informatics in ROP Consortium, which comprises 8 tertiary care centers in North America. Fundus photographs of all infants undergoing ROP screening examinations between July 1, 2011, and December 31, 2016, were obtained.

Participants: Infants meeting ROP screening criteria who were diagnosed with plus disease and treatment initiated by an examining physician based on ophthalmoscopic examination results.

Methods: An ROP severity score (1-9) was generated for each image using a deep learning (DL) algorithm.

Main Outcome Measures: The mean, median, and range of ROP vascular severity scores overall and for each examiner when the diagnosis of plus disease was made.

Results: A total of 5255 clinical examinations in 871 babies were analyzed. Of these, 168 eyes were diagnosed with plus disease by 11 different examiners and were included in the study. The mean ± standard deviation vascular severity score for patients diagnosed with plus disease was 7.4 ± 1.9, the median was 8.5 (interquartile range, 5.8-8.9), and the range was 1.1 to 9.0. Variability in the level of vascular severity diagnosed as plus disease was present within some examiners, and 1 examiner routinely diagnosed plus disease in patients with less severe disease than the other examiners (P < 0.01).

Conclusions: We observed variability both between and within examiners in the diagnosis of plus disease using DL. Prospective evaluation of clinical trial data using an objective measurement of vascular severity may help to define better the minimum necessary level of vascular severity for the diagnosis of plus disease or how other clinical features such as zone, stage, and extent of peripheral disease ought to be incorporated in treatment decisions.

Source
DOI: http://dx.doi.org/10.1016/j.oret.2020.04.022
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7867469
October 2020

Siamese neural networks for continuous disease severity evaluation and change detection in medical imaging.

NPJ Digit Med 2020 Mar 26;3:48. Epub 2020 Mar 26.

Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA USA.

Using medical images to evaluate disease severity and change over time is a routine and important task in clinical decision making. Grading systems are often used, but are unreliable as domain experts disagree on disease severity category thresholds. These discrete categories also do not reflect the underlying continuous spectrum of disease severity. To address these issues, we developed a convolutional Siamese neural network approach to evaluate disease severity at single time points and change between longitudinal patient visits on a continuous spectrum. We demonstrate this in two medical imaging domains: retinopathy of prematurity (ROP) in retinal photographs and osteoarthritis in knee radiographs. Our patient cohorts consist of 4861 images from 870 patients in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) cohort study and 10,012 images from 3021 patients in the Multicenter Osteoarthritis Study (MOST), both of which feature longitudinal imaging data. Multiple expert clinician raters ranked 100 retinal images and 100 knee radiographs from excluded test sets for severity of ROP and osteoarthritis, respectively. The Siamese neural network output for each image in comparison to a pool of normal reference images correlates with disease severity rank (ρ = 0.87 for ROP and ρ = 0.89 for osteoarthritis), both within and between the clinical grading categories. Thus, this output can represent the continuous spectrum of disease severity at any single time point. The difference in these outputs can be used to show change over time. Alternatively, paired images from the same patient at two time points can be directly compared using the Siamese neural network, resulting in an additional continuous measure of change between images. Importantly, our approach does not require manual localization of the pathology of interest and requires only a binary label for training (same versus different).
The location of disease and site of change detected by the algorithm can be visualized using an occlusion sensitivity map-based approach. For a longitudinal binary change detection task, our Siamese neural networks achieve test set receiver operating characteristic areas under the curve (AUCs) of up to 0.90 in evaluating ROP or knee osteoarthritis change, depending on the change detection strategy. The overall performance on this binary task is similar to that of a conventional convolutional deep neural network trained for multi-class classification. Our results demonstrate that convolutional Siamese neural networks can be a powerful tool for evaluating the continuous spectrum of disease severity and change in medical imaging.
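The two uses of the network described above, severity relative to a pool of normal references and direct change between two visits, both reduce to distances between learned embeddings. A minimal sketch, with a stand-in embed() function in place of one twin of the trained Siamese CNN:

```python
import numpy as np

def severity_score(embed, image, normal_pool):
    """Continuous severity at one time point: summary (here, median) of
    Euclidean distances between the image's embedding and embeddings of a
    pool of normal reference images. embed() is a placeholder for the
    trained Siamese network's embedding branch."""
    z = embed(image)
    dists = [np.linalg.norm(z - embed(ref)) for ref in normal_pool]
    return float(np.median(dists))

def change_score(embed, image_t0, image_t1):
    """Longitudinal change: direct embedding distance between two visits
    of the same patient."""
    return float(np.linalg.norm(embed(image_t0) - embed(image_t1)))
```

The median-over-pool summary is an assumption for illustration; the key point is that only pairwise same/different labels are needed to train the embedding.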

Source
DOI: http://dx.doi.org/10.1038/s41746-020-0255-1
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7099081
March 2020

The influence of object structure on visual short-term memory for multipart objects.

Atten Percept Psychophys 2020 May;82(4):1613-1631

University of Georgia, Athens, GA, USA.

Numerous studies have shown that more visual features can be stored in visual short-term memory (VSTM) when they are encoded from fewer objects (Luck & Vogel, 1997, Nature, 390, 279-281; Olson & Jiang, 2002, Perception & Psychophysics, 64[7], 1055-1067). This finding has been consistent for simple objects with one surface and one boundary contour, but very few experiments have shown a clear performance benefit when features are organized as multipart objects versus spatially dispersed single-feature objects. Some researchers have suggested multipart object integration is not mandatory because of the potential ambiguity of the display (Balaban & Luria, 2015, Cortex, 26(5), 2093-2104; Luria & Vogel, 2014, Journal of Cognitive Neuroscience, 26[8], 1819-1828). For example, a white bar across the middle of a red circle could be interpreted as two objects, a white bar occluding a red circle, or as a single two-colored object. We explore whether an object benefit can be found by disambiguating the figure-ground organization of multipart objects using a luminance gradient and linear perspective to create the appearance of a unified surface. Also, we investigated memory for objects with a visual feature indicated by a hole, rather than an additional surface on the object. Results indicate the organization of multipart objects can influence VSTM performance, but the effect is driven by how the specific organization allows for use of global ensemble statistics of the memory array rather than a memory benefit for local object representations.

Source
DOI: http://dx.doi.org/10.3758/s13414-019-01957-4
May 2020

Monitoring Disease Progression With a Quantitative Severity Scale for Retinopathy of Prematurity Using Deep Learning.

JAMA Ophthalmol 2019 Jul 3. Epub 2019 Jul 3.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland.

Importance: Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide, but clinical diagnosis is subjective and qualitative.

Objective: To describe a quantitative ROP severity score derived using a deep learning algorithm designed to evaluate plus disease and to assess its utility for objectively monitoring ROP progression.

Design, Setting, And Participants: This retrospective cohort study included images from 5255 clinical examinations of 871 premature infants who met the ROP screening criteria of the Imaging and Informatics in ROP (i-ROP) Consortium, which comprises 9 tertiary care centers in North America, from July 1, 2011, to December 31, 2016. Data analysis was performed from July 2017 to May 2018.

Exposure: A deep learning algorithm was used to assign a continuous ROP vascular severity score from 1 (most normal) to 9 (most severe) at each examination based on a single posterior photograph compared with a reference standard diagnosis (RSD) simplified into 4 categories: no ROP, mild ROP, type 2 ROP or pre-plus disease, or type 1 ROP. Disease course was assessed longitudinally across multiple examinations for all patients.

Main Outcomes And Measures: Mean ROP vascular severity score progression over time compared with the RSD.

Results: A total of 5255 clinical examinations from 871 infants (mean [SD] gestational age, 27.0 [2.0] weeks; 493 [56.6%] male; mean [SD] birth weight, 949 [271] g) were analyzed. The median severity scores for each category were as follows: 1.1 (interquartile range [IQR], 1.0-1.5) (no ROP), 1.5 (IQR, 1.1-3.4) (mild ROP), 4.6 (IQR, 2.4-5.3) (type 2 and pre-plus), and 7.5 (IQR, 5.0-8.7) (treatment-requiring ROP) (P < .001). When the long-term differences in the median severity scores across time between the eyes progressing to treatment and those that did not eventually require treatment were compared, the median score was higher in the treatment group by 0.06 at 30 to 32 weeks, 0.75 at 32 to 34 weeks, 3.56 at 34 to 36 weeks, 3.71 at 36 to 38 weeks, and 3.24 at 38 to 40 weeks postmenstrual age (P < .001 for all comparisons).

Conclusions And Relevance: The findings suggest that the proposed ROP vascular severity score is associated with category of disease at a given point in time and clinical progression of ROP in premature infants. Automated image analysis may be used to quantify clinical disease progression and identify infants at high risk for eventually developing treatment-requiring ROP. This finding has implications for quality and delivery of ROP care and for future approaches to disease classification.

Source
DOI: http://dx.doi.org/10.1001/jamaophthalmol.2019.2433
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6613341
July 2019

A Quantitative Severity Scale for Retinopathy of Prematurity Using Deep Learning to Monitor Disease Regression After Treatment.

JAMA Ophthalmol 2019 Jul 3. Epub 2019 Jul 3.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland.

Importance: Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide, but treatment failure and disease recurrence are important causes of adverse outcomes in patients with treatment-requiring ROP (TR-ROP).

Objectives: To apply an automated ROP vascular severity score obtained using a deep learning algorithm and to assess its utility for objectively monitoring ROP regression after treatment.

Design, Setting, And Participants: This retrospective cohort study used data from the Imaging and Informatics in ROP consortium, which comprises 9 tertiary referral centers in North America that screen high volumes of at-risk infants for ROP. Images of 5255 clinical eye examinations from 871 infants performed between July 2011 and December 2016 were assessed for eligibility in the present study. The disease course was assessed over time across the multiple examinations for patients with TR-ROP. Infants born prematurely who met screening criteria for ROP, developed TR-ROP, and had images captured within 4 weeks before and after treatment as well as at the time of treatment were included.

Main Outcomes And Measures: The primary outcome was mean (SD) ROP vascular severity score before, at time of, and after treatment. A deep learning classifier was used to assign a continuous ROP vascular severity score, which ranged from 1 (normal) to 9 (most severe), at each examination. A secondary outcome was the difference in ROP vascular severity score among eyes treated with laser or the vascular endothelial growth factor antagonist bevacizumab. Differences between groups for both outcomes were assessed using unpaired 2-tailed t tests with Bonferroni correction.
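The group comparisons described can be sketched with SciPy; the helper below applies unpaired two-tailed t tests with a Bonferroni correction (the function shape and argument names are illustrative, not from the study):

```python
import numpy as np
from scipy.stats import ttest_ind

def bonferroni_ttests(groups_a, groups_b, n_comparisons):
    """Unpaired two-tailed t tests over paired lists of samples
    (e.g. severity scores in laser- vs. bevacizumab-treated eyes at
    several time points), with Bonferroni-corrected P values."""
    results = []
    for a, b in zip(groups_a, groups_b):
        t, p = ttest_ind(a, b)                       # two-tailed by default
        results.append((t, min(p * n_comparisons, 1.0)))  # Bonferroni, capped at 1
    return results
```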

Results: Of 5255 examined eyes, 91 developed TR-ROP, of which 46 eyes met the inclusion criteria based on the available images. The mean (SD) birth weight of those patients was 653 (185) g, with a mean (SD) gestational age of 24.9 (1.3) weeks. The mean (SD) ROP vascular severity scores significantly increased 2 weeks prior to treatment (4.19 [1.75]), peaked at treatment (7.43 [1.89]), and decreased for at least 2 weeks after treatment (4.00 [1.88]) (all P < .001). Eyes requiring retreatment with laser had higher ROP vascular severity scores at the time of initial treatment compared with eyes receiving a single treatment (P < .001).

Conclusions And Relevance: This quantitative ROP vascular severity score appears to consistently reflect clinical disease progression and posttreatment regression in eyes with TR-ROP. These study results may have implications for the monitoring of patients with ROP for treatment failure and disease recurrence and for determining the appropriate level of disease severity for primary treatment in eyes with aggressive disease.

Source
DOI: http://dx.doi.org/10.1001/jamaophthalmol.2019.2442
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6613298
July 2019

Automatic assessment of glioma burden: a deep learning algorithm for fully automated volumetric and bidimensional measurement.

Neuro Oncol 2019 11;21(11):1412-1422

Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA.

Background: Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal fluid attenuated inversion recovery (FLAIR) hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bidimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO).

Methods: Two cohorts of patients were used for this study. One consisted of 843 preoperative MRIs from 843 patients with low- or high-grade gliomas from 4 institutions and the second consisted of 713 longitudinal postoperative MRI visits from 54 patients with newly diagnosed glioblastomas (each with 2 pretreatment "baseline" MRIs) from 1 institution.

Results: The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with an intraclass correlation coefficient (ICC) of 0.986, 0.991, and 0.977, respectively, on the cohort of postoperative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for preoperative FLAIR hyperintensity, postoperative FLAIR hyperintensity, and postoperative contrast-enhancing tumor volumes, respectively. Lastly, the ICCs for comparing manually and automatically derived longitudinal changes in tumor burden were 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively.
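The repeatability figures above are intraclass correlation coefficients computed on the double-baseline ("scan-rescan") visits. As a minimal illustration of the statistic itself (not the authors' analysis code), a two-way, single-measure, consistency-type ICC(3,1) can be computed from an (n subjects × k repeats) table:

```python
import numpy as np

def icc_3_1(measurements):
    """Two-way mixed, single-measure ICC(3,1) for repeated measurements.

    `measurements` is an (n_subjects, k_repeats) array, e.g. a tumor volume
    measured at each of two baseline visits.
    """
    x = np.asarray(measurements, dtype=float)
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    rep_means = x.mean(axis=0)
    # Two-way ANOVA decomposition: subjects, repeats (visits), residual
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_rep = n * ((rep_means - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_subj - ss_rep
    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)
```

Because the visit effect is removed, a constant scan-to-rescan offset does not lower this consistency-type ICC; only subject-by-visit inconsistency does.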

Conclusions: Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex posttreatment settings, although further validation in multicenter clinical trials will be needed prior to widespread implementation.
http://dx.doi.org/10.1093/neuonc/noz106
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6827825
November 2019

Automated Fundus Image Quality Assessment in Retinopathy of Prematurity Using Deep Convolutional Neural Networks.

Ophthalmol Retina 2019 05 31;3(5):444-450. Epub 2019 Jan 31.

Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon; Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon.

Purpose: Accurate image-based ophthalmic diagnosis relies on fundus image clarity. This has important implications for the quality of ophthalmic diagnoses and for emerging methods such as telemedicine and computer-based image analysis. The purpose of this study was to implement a deep convolutional neural network (CNN) for automated assessment of fundus image quality in retinopathy of prematurity (ROP).

Design: Experimental study.

Participants: Retinal fundus images were collected from preterm infants during routine ROP screenings.

Methods: Six thousand one hundred thirty-nine retinal fundus images were collected from 9 academic institutions. Each image was graded for quality (acceptable quality [AQ], possibly acceptable quality [PAQ], or not acceptable quality [NAQ]) by 3 independent experts. Quality was defined as the ability to assess an image confidently for the presence of ROP. Of the 6139 images, NAQ, PAQ, and AQ images represented 5.6%, 43.6%, and 50.8% of the image set, respectively. Because of low representation of NAQ images in the data set, images labeled NAQ were grouped into the PAQ category, and a binary CNN classifier was trained using 5-fold cross-validation on 4000 images. A test set of 2109 images was held out for final model evaluation. Additionally, 30 images were ranked from worst to best quality by 6 experts via pairwise comparisons, and the CNN's ability to rank quality, regardless of quality classification, was assessed.

Main Outcome Measures: The CNN performance was evaluated using area under the receiver operating characteristic curve (AUC). A Spearman's rank correlation was calculated to evaluate the overall ability of the CNN to rank images from worst to best quality as compared with experts.

Results: The mean AUC for 5-fold cross-validation was 0.958 (standard deviation, 0.005) for the diagnosis of AQ versus PAQ images. The AUC was 0.965 for the test set. The Spearman's rank correlation coefficient on the set of 30 images was 0.90 as compared with the overall expert consensus ranking.
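The rank correlation reported above compares the CNN's ordering of the 30 images against the expert consensus ordering. A from-scratch Spearman correlation (illustrative only, not the study's code) is just a Pearson correlation on average ranks:

```python
import numpy as np

def rankdata(values):
    """1-based ranks, with tied values sharing their average rank."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    sorted_vals = values[order]
    ranks = np.empty(len(values))
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and sorted_vals[j + 1] == sorted_vals[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average rank for tie group
        i = j + 1
    return ranks

def spearman_rho(a, b):
    """Spearman correlation: Pearson correlation of the two rank vectors."""
    ra, rb = rankdata(a), rankdata(b)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))
```

A value of 0.90, as in the study, means the model's quality ordering almost monotonically tracks the experts' ordering even where the binary labels agree less.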

Conclusions: This model accurately assessed retinal fundus image quality in a comparable manner with that of experts. This fully automated model has potential for application in clinical settings, telemedicine, and computer-based image analysis in ROP and for generalizability to other ophthalmic diseases.
http://dx.doi.org/10.1016/j.oret.2019.01.015
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6501831
May 2019

Deep Learning for Image Quality Assessment of Fundus Images in Retinopathy of Prematurity.

AMIA Annu Symp Proc 2018 5;2018:1224-1232. Epub 2018 Dec 5.

Medical Informatics & Clinical Epidemiology.

Accurate image-based medical diagnosis relies upon adequate image quality and clarity. This has important implications for clinical diagnosis, and for emerging methods such as telemedicine and computer-based image analysis. In this study, we trained a convolutional neural network (CNN) to automatically assess the quality of retinal fundus images in a representative ophthalmic disease, retinopathy of prematurity (ROP). 6,043 wide-angle fundus images were collected from preterm infants during routine ROP screening examinations. Images were assessed by clinical experts for quality regarding ability to diagnose ROP accurately, and were labeled "acceptable" or "not acceptable." The CNN training, validation and test sets consisted of 2,770 images, 200 images, and 3,073 images, respectively. Test set accuracy was 89.1%, with area under the receiver operating curve equal to 0.964, and area under the precision-recall curve equal to 0.966. Taken together, our CNN shows promise as a useful prescreening method for telemedicine and computer-based image analysis applications. We feel this methodology is generalizable to all clinical domains involving image-based diagnosis.
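The AUC values quoted above have a useful rank interpretation: the probability that a randomly chosen "acceptable" image receives a higher model score than a randomly chosen "not acceptable" one. A minimal sketch of that equivalence, using hypothetical scores rather than study data:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney rank statistic: the probability that a
    randomly chosen positive (label 1) outscores a randomly chosen
    negative (label 0), with half credit for score ties."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

So an AUC of 0.964 means roughly 96 times out of 100 the model scores an acceptable image above an unacceptable one, regardless of any particular threshold.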
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6371336
December 2019

Deciphering the genomic, epigenomic, and transcriptomic landscapes of pre-invasive lung cancer lesions.

Nat Med 2019 03 21;25(3):517-525. Epub 2019 Jan 21.

Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK.

The molecular alterations that occur in cells before cancer is manifest are largely uncharted. Lung carcinoma in situ (CIS) lesions are the pre-invasive precursor to squamous cell carcinoma. Although microscopically identical, their future is in equipoise, with half progressing to invasive cancer and half regressing or remaining static. The cellular basis of this clinical observation is unknown. Here, we profile the genomic, transcriptomic, and epigenomic landscape of CIS in a unique patient cohort with longitudinally monitored pre-invasive disease. Predictive modeling identifies which lesions will progress with remarkable accuracy. We identify progression-specific methylation changes on a background of widespread heterogeneity, alongside a strong chromosomal instability signature. We observed mutations and copy number changes characteristic of cancer and chart their emergence, offering a window into early carcinogenesis. We anticipate that this new understanding of cancer precursor biology will improve early detection, reduce overtreatment, and foster preventative therapies targeting early clonal events in lung cancer.
http://dx.doi.org/10.1038/s41591-018-0323-0
March 2019

Evaluation of a deep learning image assessment system for detecting severe retinopathy of prematurity.

Br J Ophthalmol 2018 Nov 23. Epub 2018 Nov 23.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA

Background: Prior work has demonstrated the near-perfect accuracy of a deep learning retinal image analysis system for diagnosing plus disease in retinopathy of prematurity (ROP). Here we assess the screening potential of this scoring system by determining its ability to detect all components of ROP diagnosis.

Methods: Clinical examination and fundus photography were performed at seven participating centres. A deep learning system was trained to detect plus disease, generating a quantitative assessment of retinal vascular abnormality (the i-ROP plus score) on a 1-9 scale. Overall ROP disease category was established using a consensus reference standard diagnosis combining clinical and image-based diagnosis. Experts then rank ordered a second data set of 100 posterior images according to overall ROP severity.

Results: 4861 examinations from 870 infants were analysed. 155 examinations (3%) had a reference standard diagnosis of type 1 ROP. The i-ROP deep learning (DL) vascular severity score had an area under the receiver operating curve of 0.960 for detecting type 1 ROP. Establishing a threshold i-ROP DL score of 3 conferred 94% sensitivity, 79% specificity, 13% positive predictive value and 99.7% negative predictive value for type 1 ROP. There was strong correlation between expert rank ordering of overall ROP severity and the i-ROP DL vascular severity score (Spearman correlation coefficient=0.93; p<0.0001).
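The operating-point statistics quoted above (sensitivity, specificity, PPV, NPV) all derive from a single 2×2 confusion table once a score threshold is fixed. A sketch with hypothetical labels and scores (not study data), using a "refer if score >= threshold" rule:

```python
def screening_metrics(y_true, scores, threshold):
    """Confusion-matrix metrics for a 'refer if score >= threshold' rule.
    y_true: 1 = disease present, 0 = absent."""
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < threshold)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= threshold)
    tn = sum(1 for y, s in zip(y_true, scores) if y == 0 and s < threshold)
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true disease referred
        "specificity": tn / (tn + fp),  # fraction of healthy not referred
        "ppv": tp / (tp + fp),          # referred cases that are true disease
        "npv": tn / (tn + fn),          # non-referred cases truly healthy
    }
```

With a rare outcome such as type 1 ROP (3% prevalence here), even good sensitivity and specificity yield a low PPV and a very high NPV, exactly the 13% / 99.7% pattern reported.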

Conclusion: The i-ROP DL system accurately identifies diagnostic categories and overall disease severity in an automated fashion, after being trained only on posterior pole vascular morphology. These data provide proof of concept that a deep learning screening platform could improve objectivity of ROP diagnosis and accessibility of screening.
http://dx.doi.org/10.1136/bjophthalmol-2018-313156
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7880608
November 2018

ISLES 2016 and 2017-Benchmarking Ischemic Stroke Lesion Outcome Prediction Based on Multispectral MRI.

Front Neurol 2018 13;9:679. Epub 2018 Sep 13.

Medical Image Analysis, Institute for Surgical Technology and Biomechanics, University of Bern, Bern, Switzerland.

Model performance depends not only on the algorithm used but also on the data set to which it is applied. This makes comparing newly developed tools with previously published approaches difficult: either researchers must first implement others' algorithms to establish an adequate benchmark on their own data, or a direct comparison of new and old techniques is infeasible. The Ischemic Stroke Lesion Segmentation (ISLES) challenge, which has now run for three consecutive years, aims to address this problem of comparability. ISLES 2016 and 2017 focused on lesion outcome prediction after ischemic stroke: by providing a uniformly preprocessed data set, researchers from all over the world could apply their algorithms directly. A total of nine teams participated in ISLES 2016, and 15 teams participated in ISLES 2017. Their performance was evaluated in a fair and transparent way to identify the state of the art among all submissions. Top-ranked teams almost always employed deep learning tools, predominantly convolutional neural networks (CNNs). Despite these efforts, lesion outcome prediction remains challenging. The annotated data set remains publicly available, and new approaches can be compared directly via the online evaluation system, which serves as a continuing benchmark (www.isles-challenge.org).
http://dx.doi.org/10.3389/fneur.2018.00679
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6146088
September 2018

Contrast sensitivity indicates processing level of visual illusions.

J Exp Psychol Hum Percept Perform 2018 Oct 9;44(10):1557-1566. Epub 2018 Jul 9.

Department of Psychology.

A nearly linear contrast response function (CRF) is found in the lower-level striate cortex, whereas the higher-level extrastriate cortex shows a steep, nonlinear increase at lower contrasts that gradually approaches response saturation at higher contrasts. This change of CRFs along the ventral cortical pathway indicates a shift from stimulus- and energy-dependent coding at lower levels to percept- and information-dependent coding at higher levels. The increased nonlinearity at higher levels optimizes the extraction of perceptual information by amplifying responses to the ubiquitous low-contrast inputs in the environment. We used this difference of CRFs between lower and higher levels, particularly at lower contrasts (0 to 0.30), as a tool to investigate two lower-level (simultaneous brightness and simultaneous tilt) and two higher-level (Poggendorff and Ponzo) illusions. As predicted, the Poggendorff and Ponzo illusions yielded strong nonlinear increases in their CRFs compared with the more linear functions found for the simultaneous-brightness and simultaneous-tilt illusions. We conclude that the Poggendorff and Ponzo illusions rely more heavily on high-level, percept-dependent cortical processing than do the simultaneous-brightness and simultaneous-tilt illusions and, more generally, that differences between contrast-dependent changes may be a useful tool in determining the relative level of cortical processing of many other visual effects.
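The saturating, nonlinear CRF described above is commonly modeled with the Naka-Rushton (hyperbolic-ratio) function. The sketch below uses illustrative parameter values (r_max, c50, n are placeholders, not values fitted in this study):

```python
import numpy as np

def naka_rushton(contrast, r_max=1.0, c50=0.1, n=2.0):
    """Hyperbolic-ratio contrast response: steep rise at low contrast,
    gradual saturation at high contrast.

    r_max: saturating response level; c50: contrast at half-maximal
    response; n: exponent controlling steepness. All illustrative.
    """
    c = np.asarray(contrast, dtype=float)
    return r_max * c ** n / (c ** n + c50 ** n)
```

With n > 1 the response rises steeply over the low-contrast range (0 to 0.30) used in the study, whereas n = 1 with a large c50 approximates the nearly linear striate-like CRF.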
http://dx.doi.org/10.1037/xhp0000554
October 2018

Automated Diagnosis of Plus Disease in Retinopathy of Prematurity Using Deep Convolutional Neural Networks.

JAMA Ophthalmol 2018 07;136(7):803-810

Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland.

Importance: Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide. The decision to treat is primarily based on the presence of plus disease, defined as dilation and tortuosity of retinal vessels. However, clinical diagnosis of plus disease is highly subjective and variable.

Objective: To implement and validate an algorithm based on deep learning to automatically diagnose plus disease from retinal photographs.

Design, Setting, And Participants: A deep convolutional neural network was trained using a data set of 5511 retinal photographs. Each image was previously assigned a reference standard diagnosis (RSD) based on consensus of image grading by 3 experts and clinical diagnosis by 1 expert (ie, normal, pre-plus disease, or plus disease). The algorithm was evaluated by 5-fold cross-validation and tested on an independent set of 100 images. Images were collected from 8 academic institutions participating in the Imaging and Informatics in ROP (i-ROP) cohort study. The deep learning algorithm was tested against 8 ROP experts, each of whom had more than 10 years of clinical experience and more than 5 peer-reviewed publications about ROP. Data were collected from July 2011 to December 2016. Data were analyzed from December 2016 to September 2017.

Exposures: A deep learning algorithm trained on retinal photographs.

Main Outcomes And Measures: Receiver operating characteristic analysis was performed to evaluate performance of the algorithm against the RSD. Quadratic-weighted κ coefficients were calculated for ternary classification (ie, normal, pre-plus disease, and plus disease) to measure agreement with the RSD and 8 independent experts.

Results: Of the 5511 included retinal photographs, 4535 (82.3%) were graded as normal, 805 (14.6%) as pre-plus disease, and 172 (3.1%) as plus disease, based on the RSD. Mean (SD) area under the receiver operating characteristic curve statistics were 0.94 (0.01) for the diagnosis of normal (vs pre-plus disease or plus disease) and 0.98 (0.01) for the diagnosis of plus disease (vs normal or pre-plus disease). For diagnosis of plus disease in an independent test set of 100 retinal images, the algorithm achieved a sensitivity of 93% with 94% specificity. For detection of pre-plus disease or worse, the sensitivity and specificity were 100% and 94%, respectively. On the same test set, the algorithm achieved a quadratic-weighted κ coefficient of 0.92 compared with the RSD, outperforming 6 of 8 ROP experts.
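The agreement statistic used above, the quadratic-weighted κ, treats the three categories (normal, pre-plus, plus) as ordinal and penalizes disagreements by squared class distance, so confusing normal with plus costs more than confusing normal with pre-plus. A minimal sketch of the statistic (not the study's evaluation code):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes=3):
    """Chance-corrected agreement on an ordinal scale, with disagreements
    penalized by squared class distance (quadratic weights)."""
    observed = np.zeros((n_classes, n_classes))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()
    # Penalty matrix: 0 on the diagonal, 1 for maximal disagreement
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected table under independent marginals (chance agreement)
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

κ = 1 indicates perfect agreement, 0 chance-level agreement, and negative values systematic disagreement; 0.92 therefore places the algorithm within the range of strong expert-to-reference agreement.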

Conclusions And Relevance: This fully automated algorithm diagnosed plus disease in ROP with comparable or better accuracy than human experts. This has potential applications in disease detection, monitoring, and prognosis in infants at risk of ROP.
http://dx.doi.org/10.1001/jamaophthalmol.2018.1934
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6136045
July 2018

Can Contrast-Response Functions Indicate Visual Processing Levels?

Vision (Basel) 2018 Mar 1;2(1). Epub 2018 Mar 1.

Department of Psychology, University of Georgia, Athens, GA 30602, USA.

Many visual effects are believed to be processed at several functional and anatomical levels of cortical processing. Determining if and how the levels contribute differentially to these effects is a leading problem in visual perception and visual neuroscience. We review and analyze a combination of extant psychophysical findings in the context of neurophysiological and brain-imaging results. Specifically using findings relating to visual illusions, crowding, and masking as exemplary cases, we develop a theoretical rationale for showing how relative levels of cortical processing contributing to these effects can already be deduced from the psychophysically determined functions relating respectively the illusory, crowding and masking strengths to the contrast of the illusion inducers, of the flankers producing the crowding, and of the mask. The wider implications of this rationale show how it can help to settle or clarify theoretical and interpretive inconsistencies and how it can further psychophysical, brain-recording and brain-imaging research geared to explore the relative functional and cortical levels at which conscious and unconscious processing of visual information occur. Our approach also allows us to make some specific predictions for future studies, whose results will provide empirical tests of its validity.
http://dx.doi.org/10.3390/vision2010014
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6835543
March 2018

Corrigendum: High-throughput discovery of novel developmental phenotypes.

Nature 2017 11 8;551(7680):398. Epub 2017 Nov 8.

This corrects the article DOI: 10.1038/nature19356.
http://dx.doi.org/10.1038/nature24643
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5849394
November 2017

Detection and characterisation of bone destruction in murine rheumatoid arthritis using statistical shape models.

Med Image Anal 2017 Aug 23;40:30-43. Epub 2017 May 23.

School of Computer Science, University of Birmingham, Birmingham B15 2TT, UK.

Rheumatoid arthritis (RA) is an autoimmune disease in which chronic inflammation of the synovial joints can lead to destruction of cartilage and bone. Pre-clinical studies attempt to uncover the underlying causes by emulating the disease in genetically different mouse strains and characterising the nature and severity of bone shape changes as indicators of pathology. This paper presents a fully automated method for obtaining quantitative measurements of bone destruction from volumetric micro-CT images of a mouse hind paw. A statistical model of normal bone morphology derived from a training set of healthy examples serves as a template against which a given pathological sample is compared. Abnormalities in bone shapes are identified as deviations from the model statistics, characterised in terms of type (erosion / formation) and quantified in terms of severity (percentage affected bone area). The colour-coded magnitudes of the deviations superimposed on a three-dimensional rendering of the paw show at a glance the severity of malformations for the individual bones and joints. With quantitative data it is possible to derive population statistics characterising differences in bone malformations for different mouse strains and in different anatomical regions. The method was applied to data acquired from three different mouse strains. The derived quantitative indicators of bone destruction have shown agreement both with the subjective visual scores and with the previous biological findings. This suggests that pathological bone shape changes can be usefully and objectively identified as deviations from the model statistics.
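As a schematic reduction of the approach described above (the paper uses a full statistical shape model; this sketch assumes independent per-vertex Gaussian statistics from the healthy training set, which is a simplification):

```python
import numpy as np

def deviation_map(sample, train_mean, train_std, z_thresh=2.0):
    """Classify per-vertex deviations of a sample shape from healthy-model
    statistics. Sign convention (illustrative): positive z = outward
    deviation (bone formation), negative z = inward (erosion)."""
    z = (np.asarray(sample, float) - train_mean) / train_std
    labels = np.where(z > z_thresh, "formation",
             np.where(z < -z_thresh, "erosion", "normal"))
    # Severity as the percentage of vertices deviating beyond threshold
    severity = 100.0 * np.mean(np.abs(z) > z_thresh)
    return z, labels, severity
```

The z map can then be colour-coded on a 3D rendering, and the severity percentage aggregated per bone or per strain, mirroring the quantitative indicators described in the abstract.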
http://dx.doi.org/10.1016/j.media.2017.05.006
August 2017

Where did I come from? Where am I going? Functional differences in visual search fixation duration.

J Eye Mov Res 2017 Mar 4;10(1). Epub 2017 Mar 4.

University of Detroit Mercy, Detroit, Michigan, USA.

Real time simulation of visual search behavior can occur only if the control of fixation durations is sufficiently understood. Visual search studies have typically confounded pre- and post-saccadic influences on fixation duration. In the present study, pre- and post-saccadic influences on fixation durations were compared by considering saccade direction. Novel use of a gaze-contingent moving obstructer paradigm also addressed relative contributions of both influences to total fixation duration. As a function of saccade direction, pre-saccadic fixation durations exhibited a different pattern from post-saccadic fixation durations. Post-saccadic fixations were also more strongly influenced by peripheral obstruction than pre-saccadic fixation durations. This suggests that post-saccadic influences may contribute more to fixation durations than pre-saccadic influences. Together, the results demonstrate that it is insufficient to model the control of visual search fixation durations without consideration of pre- and post-saccadic influences.
http://dx.doi.org/10.16910/jemr.10.1.5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7141091
March 2017

A bioimage informatics platform for high-throughput embryo phenotyping.

Brief Bioinform 2018 01;19(1):41-51

High-throughput phenotyping is a cornerstone of numerous functional genomics projects. In recent years, imaging screens have become increasingly important in understanding gene-phenotype relationships in studies of cells, tissues and whole organisms. Three-dimensional (3D) imaging has risen to prominence in the field of developmental biology for its ability to capture whole embryo morphology and gene expression, as exemplified by the International Mouse Phenotyping Consortium (IMPC). Large volumes of image data are being acquired by multiple institutions around the world that encompass a range of modalities, proprietary software and metadata. To facilitate robust downstream analysis, images and metadata must be standardized to account for these differences. As an open scientific enterprise, making the data readily accessible is essential so that members of biomedical and clinical research communities can study the images for themselves without the need for highly specialized software or technical expertise. In this article, we present a platform of software tools that facilitate the upload, analysis and dissemination of 3D images for the IMPC. Over 750 reconstructions from 80 embryonic lethal and subviable lines have been captured to date, all of which are openly accessible at mousephenotype.org. Although designed for the IMPC, all software is available under an open-source licence for others to use and develop further. Ongoing developments aim to increase throughput and improve the analysis and dissemination of image data. Furthermore, we aim to ensure that images are searchable so that users can locate relevant images associated with genes, phenotypes or human diseases of interest.
http://dx.doi.org/10.1093/bib/bbw101
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5862285
January 2018

High-throughput discovery of novel developmental phenotypes.

Nature 2016 09 14;537(7621):508-514. Epub 2016 Sep 14.

The Jackson Laboratory, Bar Harbor, Maine 04609, USA.

Approximately one-third of all mammalian genes are essential for life. Phenotypes resulting from knockouts of these genes in mice have provided tremendous insight into gene function and congenital disorders. As part of the International Mouse Phenotyping Consortium effort to generate and phenotypically characterize 5,000 knockout mouse lines, here we identify 410 lethal genes during the production of the first 1,751 unique gene knockouts. Using a standardized phenotyping platform that incorporates high-resolution 3D imaging, we identify phenotypes at multiple time points for previously uncharacterized genes and additional phenotypes for genes with previously reported mutant phenotypes. Unexpectedly, our analysis reveals that incomplete penetrance and variable expressivity are common even on a defined genetic background. In addition, we show that human disease genes are enriched for essential genes, thus providing a dataset that facilitates the prioritization and validation of mutations identified in clinical sequencing efforts.
http://dx.doi.org/10.1038/nature19356
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5295821
September 2016

Variation in Red Blood Cell Transfusion Practices During Cardiac Operations Among Centers in Maryland: Results From a State Quality-Improvement Collaborative.

Ann Thorac Surg 2017 Jan 20;103(1):152-160. Epub 2016 Aug 20.

Division of Cardiac Surgery, Department of Medicine, The Johns Hopkins University School of Medicine, Baltimore, Maryland.

Background: Variation in red blood cell (RBC) transfusion practices exists at cardiac surgery centers across the nation. We tested the hypothesis that significant variation in RBC transfusion practices between centers in our state's cardiac surgery quality collaborative remains even after risk adjustment.

Methods: Using a multi-institutional statewide database created by the Maryland Cardiac Surgery Quality Initiative (MCSQI), we included patient-level data from 8,141 patients undergoing isolated coronary artery bypass (CAB) or aortic valve replacement at 1 of 10 centers. Risk-adjusted multivariable logistic regression models were constructed to predict the need for any intraoperative RBC transfusion, as well as for any postoperative RBC transfusion, with anonymized center number included as a factor variable.

Results: Unadjusted intraoperative RBC transfusion probabilities at the 10 centers ranged from 13% to 60%; postoperative RBC transfusion probabilities ranged from 16% to 41%. After risk adjustment with demographic, comorbidity, and operative data, significant intercenter variability was documented (intraoperative probability range, 4%-59%; postoperative probability range, 13%-39%). When stratifying patients by preoperative hematocrit quartiles, significant variability in intraoperative transfusion probability was seen among all quartiles (lowest quartile: mean hematocrit value, 30.5% ± 4.1%, probability range, 17%-89%; highest quartile: mean hematocrit value, 44.8% ± 2.5%; probability range, 1%-35%).
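The risk-adjustment step described above fits logistic models with center included as a factor, so that the center coefficients capture residual inter-center variation after patient-level predictors are accounted for. A toy sketch of such a model using plain gradient descent (a hypothetical design, not the MCSQI modeling code):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Plain gradient-ascent logistic regression on the log-likelihood.
    Illustrative only; real risk adjustment would use a statistics package.
    X: (n, p) predictor matrix, e.g. hematocrit plus one-hot center
    indicators; y: 0/1 transfusion outcome."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w += lr * X.T @ (y - p) / len(y)       # log-likelihood gradient
    return w
```

After fitting, differences among the center-indicator coefficients (holding patient covariates fixed) quantify the center-to-center practice variation the study reports.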

Conclusions: Significant variation in intercenter RBC transfusion practices exists for both intraoperative and postoperative transfusions, even after risk adjustment, among our state's centers. Variability in intraoperative RBC transfusion persisted across quartiles of preoperative hematocrit values.
http://dx.doi.org/10.1016/j.athoracsur.2016.05.109
January 2017

Increasing task demand by obstructing object recognition increases boundary extension.

Psychon Bull Rev 2016 10;23(5):1497-1503

University of Georgia, Athens, GA, 30602, USA.

Individuals consistently remember seeing wider-angle versions of previously viewed scenes than actually existed. The multi-source model of boundary extension (BE) suggests many sources of information contribute to this visual memory error. Color diagnosticity is known to affect object recognition with poorer recognition for atypically versus typically colored objects. Scenes with low-color diagnostic main objects and two versions of scenes with high-color diagnostic main objects (typically and atypically colored) were tested to determine if the reduced ability to identify the main object in a scene influences BE. Scenes were presented to one group of participants for 46 ms and another group for 250 ms. Each scene was followed by a mask and a request for a recognition response concerning the identity of the main object. The scene was then immediately presented again for testing and participants rated it as depicting a more close-up view, more wide-angle, or the same view as before. The study demonstrates that poorer encoding of main objects in scenes leads to increased BE, but trial-by-trial recognition accuracy had no relationship to BE magnitude. This finding provides further insight into the impact of task demand and main object recognition on BE.
http://dx.doi.org/10.3758/s13423-016-1018-5
October 2016

Rapid Expansion of Human Epithelial Stem Cells Suitable for Airway Tissue Engineering.

Am J Respir Crit Care Med 2016 07;194(2):156-68

1 Lungs for Living Research Centre, UCL Respiratory, University College London, London, United Kingdom.

Rationale: Stem cell-based tracheal replacement represents an emerging therapeutic option for patients with otherwise untreatable airway diseases including long-segment congenital tracheal stenosis and upper airway tumors. Clinical experience demonstrates that restoration of mucociliary clearance in the lungs after transplantation of tissue-engineered grafts is critical, with preclinical studies showing that seeding scaffolds with autologous mucosa improves regeneration. High epithelial cell-seeding densities are required in regenerative medicine, and existing techniques are inadequate to achieve coverage of clinically suitable grafts.

Objectives: To define a scalable cell culture system to deliver airway epithelium to clinical grafts.

Methods: Human respiratory epithelial cells derived from endobronchial biopsies were cultured using a combination of mitotically inactivated fibroblasts and Rho-associated protein kinase (ROCK) inhibition using Y-27632 (3T3+Y). Cells were analyzed by immunofluorescence, quantitative polymerase chain reaction, and flow cytometry to assess airway stem cell marker expression. Karyotyping and multiplex ligation-dependent probe amplification were performed to assess cell safety. Differentiation capacity was tested in three-dimensional tracheospheres, organotypic cultures, air-liquid interface cultures, and an in vivo tracheal xenograft model. Ciliary function was assessed in air-liquid interface cultures.

Measurements And Main Results: 3T3-J2 feeder cells and ROCK inhibition allowed rapid expansion of airway basal cells. These cells were capable of multipotent differentiation in vitro, generating both ciliated and goblet cell lineages. Cilia were functional with normal beat frequency and pattern. Cultured cells repopulated tracheal scaffolds in a heterotopic transplantation xenograft model.

Conclusions: Our method generates large numbers of functional airway basal epithelial cells with the efficiency demanded by clinical transplantation, suggesting its suitability for use in tracheal reconstruction.
http://dx.doi.org/10.1164/rccm.201507-1414OC
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5003214
July 2016