Publications by authors named "Deniz Erdogmus"

117 Publications

Single-Examination Risk Prediction of Severe Retinopathy of Prematurity.

Pediatrics 2021 Nov 23. Epub 2021 Nov 23.

Departments of Ophthalmology.

Background And Objectives: Retinopathy of prematurity (ROP) is a leading cause of childhood blindness. Screening and treatment reduce this risk, but require multiple examinations of infants, most of whom will not develop severe disease. Previous work has suggested that artificial intelligence may be able to detect incident severe disease (treatment-requiring retinopathy of prematurity [TR-ROP]) before clinical diagnosis. We aimed to build a risk model that combined artificial intelligence with clinical demographics to reduce the number of examinations without missing cases of TR-ROP.

Methods: Infants undergoing routine ROP screening examinations (1579 total eyes, 190 with TR-ROP) were recruited from 8 North American study centers. A vascular severity score (VSS) was derived from retinal fundus images obtained at 32 to 33 weeks' postmenstrual age. Seven ElasticNet logistic regression models were trained on all combinations of birth weight, gestational age, and VSS. The area under the precision-recall curve was used to identify the highest-performing model.

Results: The gestational age + VSS model had the highest performance (mean ± SD area under the precision-recall curve: 0.35 ± 0.11). On 2 different test data sets (n = 444 and n = 132), sensitivity was 100% (positive predictive value: 28.1% and 22.6%) and specificity was 48.9% and 80.8% (negative predictive value: 100.0%).

Conclusions: Using a single examination, this model identified all infants who developed TR-ROP, on average, >1 month before diagnosis with moderate to high specificity. This approach could lead to earlier identification of incident severe ROP, reducing late diagnosis and treatment while simultaneously reducing the number of ROP examinations and unnecessary physiologic stress for low-risk infants.
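The model-selection step described in the Methods can be sketched as follows. This is an illustrative reconstruction, not the study's code: the data, coefficients, and hyperparameters below are synthetic, and only the overall recipe (seven ElasticNet logistic regressions over all non-empty subsets of three features, ranked by area under the precision-recall curve) follows the abstract.

```python
# Sketch: train ElasticNet logistic models on all feature subsets and pick the
# one with the highest area under the precision-recall curve (synthetic data).
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(1100, 300, n),   # birth weight (g), illustrative
    rng.normal(28, 2, n),       # gestational age (weeks), illustrative
    rng.normal(5, 2, n),        # vascular severity score, illustrative
])
# synthetic outcome loosely driven by gestational age and VSS
logit = -0.8 * (X[:, 1] - 28) + 0.9 * (X[:, 2] - 5) - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
Xs = StandardScaler().fit_transform(X)

features = ["bw", "ga", "vss"]
best = None
for r in (1, 2, 3):                      # all 7 non-empty feature subsets
    for subset in combinations(range(3), r):
        clf = LogisticRegression(penalty="elasticnet", l1_ratio=0.5,
                                 solver="saga", max_iter=5000)
        clf.fit(Xs[:, list(subset)], y)
        ap = average_precision_score(
            y, clf.predict_proba(Xs[:, list(subset)])[:, 1])
        if best is None or ap > best[0]:
            best = (ap, [features[i] for i in subset])
print(best[1], round(best[0], 3))
```

With three candidate features there are exactly seven non-empty subsets, matching the seven models trained in the study.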
Source
http://dx.doi.org/10.1542/peds.2021-051772
November 2021

Efficient TMS-Based Motor Cortex Mapping Using Gaussian Process Active Learning.

IEEE Trans Neural Syst Rehabil Eng 2021 30;29:1679-1689. Epub 2021 Aug 30.

Transcranial Magnetic Stimulation (TMS) can be used to map cortical motor topography by spatially sampling the sensorimotor cortex while recording Motor Evoked Potentials (MEP) with surface electromyography (EMG). Traditional sampling strategies are time-consuming and inefficient, as they ignore the fact that responsive sites are typically sparse and highly spatially correlated. An alternative approach, commonly employed when TMS mapping is used for presurgical planning, is to leverage the expertise of the coil operator to use MEPs elicited by previous stimuli as feedback to decide which loci to stimulate next. In this paper, we propose to automatically infer optimal future stimulus loci using active learning Gaussian Process-based sampling in place of user expertise. We first compare the user-guided (USRG) method to the traditional grid selection method and randomized sampling to verify that the USRG approach has superior performance. We then compare several novel active Gaussian Process (GP) strategies with the USRG approach. Experimental results using real data show that, as expected, the USRG method is superior to the grid and random approaches in both time efficiency and MEP map accuracy. We also found that an active warped GP entropy strategy and a GP random-based strategy performed as well as, or better than, the USRG method. These methods were completely automatic, and succeeded in efficiently sampling the regions in which the MEP response variations are largely confined. This work provides the foundation for highly efficient, fully automated TMS mapping, especially when considered in the context of advances in robotic coil operation.
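The core active-sampling loop can be illustrated with a minimal sketch, under assumptions not taken from the paper: a synthetic 10x10 scalp grid, a single Gaussian MEP "hot spot", a plain RBF-kernel GP, and a maximum-predictive-variance (entropy) criterion rather than the paper's warped-GP variants.

```python
# Minimal Gaussian-process active sampling sketch: fit a GP to MEPs observed
# so far, then stimulate next at the locus with highest predictive variance.
import numpy as np

def rbf(a, b, ls=2.0, var=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-2):
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xte)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ ytr
    var = np.clip(np.diag(rbf(Xte, Xte) - Ks.T @ sol), 1e-12, None)
    return mu, var

# candidate scalp grid and a synthetic sparse MEP "hot spot"
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])
true_mep = np.exp(-((grid - [6.0, 6.0]) ** 2).sum(1) / 4.0)

sampled = [0]                              # start from an arbitrary locus
for _ in range(15):
    mu, var = gp_posterior(grid[sampled], true_mep[sampled], grid)
    nxt = int(np.argmax(var))              # Gaussian entropy grows with variance
    if nxt not in sampled:
        sampled.append(nxt)
print(len(sampled))
```

Because the posterior variance collapses at sampled loci, the criterion automatically spreads stimulation over unexplored regions; the paper's contribution is to further weight this choice by the estimated MEP amplitudes.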
Source
http://dx.doi.org/10.1109/TNSRE.2021.3105644
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8452135
October 2021

Stochastic Mutual Information Gradient Estimation for Dimensionality Reduction Networks.

Inf Sci (N Y) 2021 Sep 20;570:298-305. Epub 2021 Apr 20.

Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA.

Feature ranking and selection is a widely used approach in various applications of supervised dimensionality reduction in discriminative machine learning. Nevertheless, there is significant evidence that feature ranking and selection algorithms based on any single criterion can lead to sub-optimal solutions for class separability. In that regard, we introduce an information-theoretic feature transformation protocol as an end-to-end neural network training approach. We present a dimensionality reduction network (MMINet) training procedure based on a stochastic estimate of the mutual information gradient. The network projects high-dimensional features onto an output feature space where the lower-dimensional representations carry maximum mutual information with their associated class labels. Furthermore, we formulate the training objective so that it is estimated non-parametrically, with no distributional assumptions. We experimentally evaluate our method on high-dimensional biological data sets, and show that conventional feature selection algorithms form a special case of our approach.
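The idea of learning a projection by pushing up a mutual-information objective can be illustrated with a deliberately simplified stand-in for MMINet: a 1-D linear projection of 2-D synthetic data, a histogram plug-in MI estimate (not the paper's stochastic gradient estimator), and numerical gradient ascent. Everything here is illustrative.

```python
# Toy sketch: learn a unit-norm projection w maximizing a histogram-based
# estimate of I(Xw; y) for binary labels y, via numerical gradient ascent.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 1.0, (200, 2)),
               rng.normal([3.0, 0.0], 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def mi_proxy(w, bins=12):
    """Plug-in estimate of I(z; y) for the projection z = X @ w."""
    z = X @ w
    edges = np.linspace(z.min() - 1e-9, z.max() + 1e-9, bins + 1)
    idx = np.clip(np.digitize(z, edges) - 1, 0, bins - 1)
    joint = np.zeros((bins, 2))
    np.add.at(joint, (idx, y), 1.0)
    joint /= joint.sum()
    pz = joint.sum(axis=1, keepdims=True)
    pc = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (pz @ pc)[mask])).sum())

w = np.array([0.1, 1.0]) / np.hypot(0.1, 1.0)   # start off-axis
for _ in range(100):
    g = np.array([(mi_proxy(w + 1e-3 * e) - mi_proxy(w - 1e-3 * e)) / 2e-3
                  for e in np.eye(2)])
    w = w + 0.1 * g
    w /= np.linalg.norm(w)
print(round(mi_proxy(w), 3))
```

Since the labels are balanced and binary, the MI estimate is bounded above by H(y) = log 2; the paper replaces this crude histogram proxy with a non-parametric stochastic gradient estimate suitable for network training.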
Source
http://dx.doi.org/10.1016/j.ins.2021.04.066
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8274569
September 2021

Geometric Analysis of Uncertainty Sampling for Dense Neural Network Layer.

IEEE Signal Process Lett 2021 9;28:867-871. Epub 2021 Apr 9.

Northeastern University Department of Electrical and Computer Engineering 409 Dana Research Center 360 Huntington Avenue Boston, MA 02115.

For model adaptation of fully connected neural network layers, we provide an information-geometric and sample-behavioral analysis of active learning uncertainty sampling objectives. We identify conditions under which several uncertainty-based methods have the same performance, and show that such conditions are more likely to appear in the early stages of learning. We define riskier samples for adaptation, and demonstrate that, as the set of labeled samples grows, margin-based sampling outperforms other uncertainty sampling methods by preferentially selecting these risky samples. We support our derivations and illustrations with experiments using Meta-Dataset, a benchmark for few-shot learning. We compare uncertainty-based active learning objectives using features produced by SimpleCNAPS (a state-of-the-art few-shot classifier) as input for a fully connected adaptation layer. Our results indicate that margin-based uncertainty sampling achieves performance similar to the other uncertainty-based sampling methods with fewer labeled samples, as predicted by the geometric analysis.
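The uncertainty sampling objectives compared in this work can be stated in a few lines each. The probabilities below are invented to show how the criteria can disagree: margin scores a sample by the gap between its top two class probabilities, so it targets points sitting near a decision boundary, while entropy favors diffuse ambiguity across all classes.

```python
# Standard uncertainty-sampling scores on softmax outputs (illustrative data).
import numpy as np

probs = np.array([
    [0.98, 0.01, 0.01],   # confident
    [0.45, 0.44, 0.11],   # top two classes nearly tied
    [0.50, 0.25, 0.25],   # diffuse ambiguity
])

def least_confidence(p):
    return 1.0 - p.max(axis=1)

def margin(p):
    s = np.sort(p, axis=1)
    return 1.0 - (s[:, -1] - s[:, -2])    # small top-2 gap -> high score

def entropy(p):
    return -(p * np.log(p)).sum(axis=1)

print(int(np.argmax(least_confidence(probs))),
      int(np.argmax(margin(probs))),
      int(np.argmax(entropy(probs))))
```

Here margin (like least-confidence) ranks the near-tie sample highest, while entropy prefers the uniformly diffuse one, which is the kind of divergence the paper's geometric analysis characterizes.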
Source
http://dx.doi.org/10.1109/lsp.2021.3072292
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8224399
April 2021

EEG-based texture roughness classification in active tactile exploration with invariant representation learning networks.

Biomed Signal Process Control 2021 May 5;67. Epub 2021 Mar 5.

Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA.

During daily activities, humans use their hands to grasp surrounding objects and to perceive sensory information, which is employed for both perceptual and motor goals. Multiple cortical brain regions are known to be responsible for sensory recognition, perception, and motor execution during sensorimotor processing. While many research studies focus on the domain of human sensorimotor control, the relation between motor execution and sensory processing is not yet fully understood. The main goal of our work is to discriminate textured surfaces varying in their roughness levels during active tactile exploration, using simultaneously recorded electroencephalogram (EEG) data, while minimizing the variance introduced by distinct motor exploration movement patterns. We performed an experimental study with eight healthy participants who were instructed to use the tip of their dominant hand's index finger while rubbing or tapping three textured surfaces with varying levels of roughness. We use an adversarial invariant representation learning neural network architecture that performs EEG-based classification of the different textured surfaces, while simultaneously minimizing the discriminability of the motor movement conditions (i.e., rub or tap). Results show that the proposed approach can discriminate between the three textured surfaces with accuracies of up to 70%, while suppressing movement-related variability from the learned representations.
Source
http://dx.doi.org/10.1016/j.bspc.2021.102507
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8078850
May 2021

Universal Physiological Representation Learning With Soft-Disentangled Rateless Autoencoders.

IEEE J Biomed Health Inform 2021 08 5;25(8):2928-2937. Epub 2021 Aug 5.

Human computer interaction (HCI) involves a multidisciplinary fusion of technologies through which external devices can be controlled by monitoring the physiological status of users. However, physiological biosignals often vary across users and recording sessions due to unstable physical/mental conditions and task-irrelevant activities. To deal with this challenge, we propose a method of adversarial feature encoding built on the concept of a Rateless Autoencoder (RAE), in order to exploit disentangled, nuisance-robust, and universal representations. We achieve a good trade-off between user-specific and task-relevant features by stochastically disentangling the latent representations with additional adversarial networks. The proposed model is applicable to a wider range of unknown users and tasks, as well as to different classifiers. Results on cross-subject transfer evaluations show the advantages of the proposed framework, with up to an 11.6% improvement in average subject-transfer classification accuracy.
Source
http://dx.doi.org/10.1109/JBHI.2021.3062335
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8359927
August 2021

Synergistic Activation Patterns of Hand Muscles in Left-and Right-Hand Dominant Individuals.

J Hum Kinet 2021 Jan 29;76:89-100. Epub 2021 Jan 29.

Department of Physical Therapy, Movement and Rehabilitation Sciences, Northeastern University, Boston, MA, USA.

Handedness has been associated with behavioral asymmetries between limbs that suggest specialized function of the dominant and non-dominant hands. Whether patterns of muscle co-activation, representing muscle synergies, also differ between the limbs remains an open question. Previous investigations of proximal upper limb muscle synergies have reported little evidence of limb asymmetry; however, whether the same is true of the distal upper limb and hand remains unknown. This study compared forearm and hand muscle synergies between the dominant and non-dominant limbs of left-handed and right-handed participants. Participants formed their hands into the postures of the American Sign Language (ASL) alphabet while EMG was recorded from hand and forearm muscles. Muscle synergies were extracted for each limb individually by applying non-negative matrix factorization (NMF). Extracted synergies were compared between limbs for each individual, and between individuals, to assess within- and across-participant differences. Results indicate no difference between the limbs within individuals, but differences in limb synergies at the population level. Left limb synergies were found to be more similar than right limb synergies across left- and right-handed individuals. Synergies of the left hand of left-dominant individuals were found to have greater population-level similarity than those of the other limbs tested. Results are interpreted with respect to known differences in the neuroanatomy and neurophysiology of proximal and distal upper limb motor control. Implications for skill training in sports requiring dexterous control of the hand are discussed.
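The synergy-extraction step is standard enough to sketch: EMG envelopes form a postures-by-muscles matrix that NMF factorizes into a small set of synergies (muscle weightings) and their activations. The data below are synthetic (2 planted synergies over 8 hypothetical muscles), not the study's recordings.

```python
# Sketch of muscle-synergy extraction via non-negative matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
true_synergies = rng.random((2, 8))        # 2 synergies over 8 muscles
activations = rng.random((60, 2))          # activations across 60 postures
emg = activations @ true_synergies + 0.01 * rng.random((60, 8))

model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(emg)               # posture-wise synergy activations
H = model.components_                      # extracted synergies (muscle weights)
print(W.shape, H.shape)
```

Between-limb comparisons of the kind reported above would then be made on the rows of H, for example with cosine similarity between matched synergies.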
Source
http://dx.doi.org/10.2478/hukin-2021-0002
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7877284
January 2021

SSVEP BCI and Eye Tracking Use by Individuals With Late-Stage ALS and Visual Impairments.

Front Hum Neurosci 2020 20;14:595890. Epub 2020 Nov 20.

Consortium for Accessible Multimodal Brain-Body Interfaces (CAMBI), Portland, OR, United States.

Access to communication is critical for individuals with late-stage amyotrophic lateral sclerosis (ALS) and minimal volitional movement, but they sometimes present with concomitant visual or ocular motility impairments that affect their performance with eye tracking or visual brain-computer interface (BCI) systems. In this study, we explored the use of modified eye tracking and steady state visual evoked potential (SSVEP) BCI, in combination with the Shuffle Speller typing interface, for this population. Two participants with late-stage ALS, visual impairments, and minimal volitional movement completed a single-case experimental research design comparing copy-spelling performance with three different typing systems: (1) commercially available eye tracking communication software, (2) Shuffle Speller with modified eye tracking, and (3) Shuffle Speller with SSVEP BCI. Participant 1 was unable to type any correct characters with the commercial system, but achieved accuracies of up to 50% with Shuffle Speller eye tracking and 89% with Shuffle Speller BCI. Participant 2 also had higher maximum accuracies with Shuffle Speller, typing with up to 63% accuracy with eye tracking and 100% accuracy with BCI. However, participants' typing accuracy for both Shuffle Speller conditions was highly variable, particularly in the BCI condition. Both the Shuffle Speller interface and SSVEP BCI input show promise for improving typing performance for people with late-stage ALS. Further development of innovative BCI systems for this population is needed.
Source
http://dx.doi.org/10.3389/fnhum.2020.595890
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7715037
November 2020

HANDS: a multimodal dataset for modeling toward human grasp intent inference in prosthetic hands.

Intell Serv Robot 2020 Jan 25;13(1):179-185. Epub 2019 Sep 25.

360 Huntington Ave, Boston, MA 02120.

Upper limb and hand functionality is critical to many activities of daily living, and the amputation of a limb can lead to significant functionality loss. From this perspective, advanced prosthetic hands of the future are anticipated to benefit from improved shared control between the robotic hand and its human user, but more importantly from an improved capability to infer human intent from multimodal sensor data, giving the robotic hand perception of its operational context. Such multimodal sensor data may include various environment sensors, including vision, as well as human physiology and behavior sensors, including electromyography (EMG) and inertial measurement units (IMU). A fusion methodology for environmental state and human intent estimation can combine these sources of evidence to aid prosthetic hand motion planning and control. In this paper, we present a dataset of this type, gathered in anticipation of cameras being built into prosthetic hands, where computer vision methods will need to assess this hand-view visual evidence in order to estimate human intent. Specifically, paired images from the human eye-view and the hand-view of various objects placed at different orientations were captured at the initial state of grasping trials, followed by paired video, EMG, and IMU recordings from the arm of the human during a grasp, lift, put-down, and retract style trial structure. For each trial, based on eye-view images of the scene showing the hand and the object on a table, multiple humans were asked to rank, in decreasing order of preference, five grasp types appropriate for the object in its given configuration relative to the hand. The potential utility of paired eye-view and hand-view images was illustrated by training a convolutional neural network to process hand-view images in order to predict the eye-view labels assigned by humans.
Source
http://dx.doi.org/10.1007/s11370-019-00293-8
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7728160
January 2020

EEG-based trial-by-trial texture classification during active touch.

Sci Rep 2020 11 27;10(1):20755. Epub 2020 Nov 27.

Electrical and Computer Engineering Department, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA.

Trial-by-trial texture classification analysis, and the identification of salient texture-related EEG features during active touch that are minimally influenced by movement type and frequency conditions, are the main contributions of this work. A total of twelve healthy subjects were recruited. Each subject was instructed to use the fingertip of their dominant hand's index finger to rub or tap three textured surfaces (smooth flat, medium rough, and rough) at three levels of movement frequency (approximately 2, 1, and 0.5 Hz). EEG and force data were collected synchronously during each touch condition. A systematic feature selection process was performed to select temporal and spectral EEG features that contribute to texture classification but have low contribution to movement type and frequency classification. Tenfold cross-validation was used to train two 3-class support vector machine classifiers (one each for texture and movement frequency classification) and one 2-class (movement type) classifier. Our results showed that the total power in the mu (8-15 Hz) and beta (16-30 Hz) frequency bands discriminated with high accuracy among textures with different levels of roughness (average accuracy > 84%), but contributed less to movement type (average accuracy < 65%) and frequency (average accuracy < 58%) classification.
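The band-power-plus-SVM pipeline can be sketched end to end on synthetic signals. This is not the study's EEG: the "trials" below are sinusoids whose mu- and beta-band amplitudes are artificially modulated by a roughness level, just to show the feature extraction and tenfold cross-validated 3-class SVM.

```python
# Sketch: mu (8-15 Hz) and beta (16-30 Hz) band power -> 3-class SVM, 10-fold CV.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
fs = 256
t = np.arange(0, 2, 1 / fs)

def trial(texture):
    # mu/beta amplitudes loosely modulated by "roughness" level (illustrative)
    mu = (1 + 0.5 * texture) * np.sin(2 * np.pi * 11 * t)
    beta = (1 + 0.3 * texture) * np.sin(2 * np.pi * 22 * t)
    return mu + beta + 0.5 * rng.normal(size=t.size)

def band_power(x, lo, hi):
    f = np.fft.rfftfreq(x.size, 1 / fs)
    p = np.abs(np.fft.rfft(x)) ** 2
    return p[(f >= lo) & (f <= hi)].sum()

X, y = [], []
for texture in (0, 1, 2):
    for _ in range(30):
        x = trial(texture)
        X.append([band_power(x, 8, 15), band_power(x, 16, 30)])
        y.append(texture)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, np.array(X), np.array(y), cv=10).mean()
print(round(acc, 2))
```

The synthetic modulation makes the classes easily separable; on real EEG the same features yielded the >84% texture accuracy reported above.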
Source
http://dx.doi.org/10.1038/s41598-020-77439-7
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7699648
November 2020

Comparing supervised and unsupervised approaches to emotion categorization in the human brain, body, and subjective experience.

Sci Rep 2020 11 20;10(1):20284. Epub 2020 Nov 20.

Department of Psychology, College of Science, Northeastern University, Boston, MA, USA.

Machine learning methods provide powerful tools to map physical measurements to scientific categories. But are such methods suitable for discovering the ground truth about psychological categories? We use the science of emotion as a test case to explore this question. In studies of emotion, researchers use supervised classifiers, guided by emotion labels, to attempt to discover biomarkers in the brain or body for the corresponding emotion categories. This practice relies on the assumption that the labels refer to objective categories that can be discovered. Here, we critically examine this approach across three distinct datasets collected during emotional episodes-measuring the human brain, body, and subjective experience-and compare supervised classification solutions with those from unsupervised clustering in which no labels are assigned to the data. We conclude with a set of recommendations to guide researchers towards meaningful, data-driven discoveries in the science of emotion and beyond.
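The supervised-versus-unsupervised contrast at the heart of this paper can be made concrete with a toy example (synthetic data, not the study's measurements): a supervised model can fit externally assigned labels almost perfectly even when unsupervised clustering shows that the data's intrinsic structure lies along a different axis entirely.

```python
# Toy contrast: supervised fit to assigned labels vs. unsupervised structure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(6)
# two well-separated latent clusters along feature 1 ...
Z = np.vstack([rng.normal([0.0, 0.0], 1.0, (150, 2)),
               rng.normal([5.0, 0.0], 1.0, (150, 2))])
true_cluster = np.array([0] * 150 + [1] * 150)
# ... but supervised labels assigned by an unrelated criterion (feature 2 sign)
labels = (Z[:, 1] > 0).astype(int)

supervised_acc = LogisticRegression().fit(Z, labels).score(Z, labels)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(round(supervised_acc, 2),
      round(adjusted_rand_score(true_cluster, clusters), 2),
      round(adjusted_rand_score(labels, clusters), 2))
```

The classifier scores highly on the assigned labels, yet the clustering recovers the latent structure and is essentially unrelated to those labels, illustrating why high supervised accuracy alone does not establish that label categories are the data's natural kinds.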
Source
http://dx.doi.org/10.1038/s41598-020-77117-8
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7679385
November 2020

A Muscle Synergy Framework for Cross-Limb Reconstruction of Hand Muscle Activity Distal to a Virtual Wrist-Level Disarticulation.

Annu Int Conf IEEE Eng Med Biol Soc 2020 07;2020:3285-3288

Currently, myoelectric prostheses lack dexterity and ease of control, in part because of inadequate schemes to extract relevant muscle features that can approximate muscle activation patterns that enable individuated dexterous finger motion. This project seeks to apply a novel algorithm pipeline that extracts muscle activation patterns from one limb, as well as from forearm muscles of the opposite limb, to predict muscle activation data of opposite limb intrinsic hand muscles, with the long-range goal of informing dexterous prosthetic control.
Source
http://dx.doi.org/10.1109/EMBC44109.2020.9175939
July 2020

Disentangled Adversarial Transfer Learning for Physiological Biosignals.

Annu Int Conf IEEE Eng Med Biol Soc 2020 07;2020:422-425

Recent developments in wearable sensors demonstrate promising results for monitoring physiological status in effective and comfortable ways. One major challenge of physiological status assessment is the transfer learning problem caused by the domain inconsistency of biosignals across users, or across different recording sessions from the same user. We propose an adversarial inference approach for transfer learning to extract disentangled, nuisance-robust representations from physiological biosignal data for stress level assessment. We exploit the trade-off between task-related features and person-discriminative information by using both an adversary network and a nuisance network to jointly manipulate and disentangle the latent representations learned by the encoder, which are then input to a discriminative classifier. Results on cross-subject transfer evaluations demonstrate the benefits of the proposed adversarial framework and show its capability to adapt to a broader range of subjects. Finally, we highlight that the proposed adversarial transfer learning approach is also applicable to other deep feature learning frameworks.
Source
http://dx.doi.org/10.1109/EMBC44109.2020.9175233
July 2020

Motor Cortex Mapping using Active Gaussian Processes.

Int Conf Pervasive Technol Relat Assist Environ 2020 Jun;2020

ECE, Northeastern University, Boston, Massachusetts.

One important application of transcranial magnetic stimulation (TMS) is to map cortical motor topography by spatially sampling the motor cortex while recording motor evoked potentials (MEP) with surface electromyography. Standard approaches to TMS mapping involve repetitive stimulation at different loci spaced on a (typically 1 cm) grid on the scalp. These mapping strategies are time-consuming, while responsive sites are typically sparse. Furthermore, the long time scale prevents measurement of transient cortical changes and is poorly tolerated in clinical populations. An alternative approach relies on the TMS mapper's expertise to exploit the map's sparsity, using feedback of MEPs to decide which loci to stimulate. In this investigation, we propose a novel active learning method to automatically infer optimal future stimulus loci in place of user expertise. Specifically, we propose an active Gaussian Process (GP) strategy with loci selection criteria such as entropy and mutual information (MI). The proposed method modifies the usual entropy- and MI-based selection criteria by modeling the estimated MEP field, i.e., the GP mean, as a Gaussian random variable itself. By doing so, we include MEP amplitudes in the loci selection criteria, which would otherwise be completely independent of the MEP values. Experimental results using real data show that the proposed strategy can greatly outperform competing methods when the MEP variations are mostly confined to a sub-region of the space.
Source
http://dx.doi.org/10.1145/3389189.3389202
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7433704
June 2020

Mapping Motor Cortex Stimulation to Muscle Responses: A Deep Neural Network Modeling Approach.

Int Conf Pervasive Technol Relat Assist Environ 2020 Jun;2020

Northeastern University, Boston, MA, USA.

A deep neural network (DNN) that can reliably model muscle responses from corresponding brain stimulation has the potential to increase knowledge of coordinated motor control for numerous basic science and applied use cases. Such cases include understanding abnormal movement patterns due to neurological injury from stroke, and stimulation-based interventions for neurological recovery such as paired associative stimulation. In this work, potential DNN models are explored, and the one with the minimum squared error is recommended for optimal performance of M2M-Net, a network that maps transcranial magnetic stimulation of the motor cortex to corresponding muscle responses, using: a finite element simulation, an empirical neural response profile, a convolutional autoencoder, a separate deep network mapper, and recordings of multi-muscle activation. We discuss the rationale behind the different modeling approaches and architectures, and contrast their results. Additionally, to obtain comparative insight into the trade-off between complexity and performance, we explore different analysis techniques, including the extension of two classical information criteria to M2M-Net. Finally, we find that the model analogous to mapping the motor cortex stimulation to a combination of direct and synergistic connections to the muscles performs best when the neural response profile is used at the input.
Source
http://dx.doi.org/10.1145/3389189.3389203
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7430758
June 2020

Plus Disease in Retinopathy of Prematurity: Convolutional Neural Network Performance Using a Combined Neural Network and Feature Extraction Approach.

Transl Vis Sci Technol 2020 02 14;9(2):10. Epub 2020 Feb 14.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA.

Purpose: Retinopathy of prematurity (ROP), a leading cause of childhood blindness, is diagnosed by clinical ophthalmoscopic examinations or reading retinal images. Plus disease, defined as abnormal tortuosity and dilation of the posterior retinal blood vessels, is the most important feature to determine treatment-requiring ROP. We aimed to create a complete, publicly available and feature-extraction-based pipeline, I-ROP ASSIST, that achieves convolutional neural network (CNN)-like performance when diagnosing plus disease from retinal images.

Methods: We developed two datasets containing 100 and 5512 posterior retinal images, respectively. After segmenting retinal vessels, we detected the vessel centerlines. Then, we extracted features relevant to ROP, including tortuosity and dilation measures, and used these features in the classifiers including logistic regression, support vector machine and neural networks to assess a severity score for the input. We tested our system with fivefold cross-validation and calculated the area under the curve (AUC) metric for each classifier and dataset.

Results: For predicting plus versus not-plus categories, we achieved 99% and 94% AUC on the first and second datasets, respectively. For predicting pre-plus or worse versus normal categories, we achieved 99% and 88% AUC on the first and second datasets, respectively. The CNN method achieved 98% and 94% for predicting two categories on the second dataset.

Conclusions: Our system combining automatic retinal vessel segmentation, tracing, feature extraction and classification is able to diagnose plus disease in ROP with CNN-like performance.

Translational Relevance: The high performance of I-ROP ASSIST suggests potential applications in automated and objective diagnosis of plus disease.
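Among the tortuosity measures used in such feature pipelines, a common and simple one is the arc-to-chord ratio of a vessel centerline. The sketch below computes it for two synthetic centerlines; it illustrates the idea only and is not the I-ROP ASSIST definition.

```python
# Arc-to-chord tortuosity of a sampled centerline: path length / endpoint
# distance; a straight segment scores 1.0, a wavy vessel scores higher.
import numpy as np

def arc_chord_tortuosity(points):
    points = np.asarray(points, dtype=float)
    arc = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

t = np.linspace(0, np.pi, 200)
straight = np.column_stack([t, np.zeros_like(t)])
wavy = np.column_stack([t, 0.3 * np.sin(4 * t)])
print(round(arc_chord_tortuosity(straight), 3),
      round(arc_chord_tortuosity(wavy), 3))
```

Features of this kind, computed along each traced centerline together with dilation measures, form the input to the classifiers described in the Methods.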
Source
http://dx.doi.org/10.1167/tvst.9.2.10
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7346878
February 2020

Siamese neural networks for continuous disease severity evaluation and change detection in medical imaging.

NPJ Digit Med 2020 26;3:48. Epub 2020 Mar 26.

1Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA USA.

Using medical images to evaluate disease severity and change over time is a routine and important task in clinical decision making. Grading systems are often used, but are unreliable, as domain experts disagree on disease severity category thresholds. These discrete categories also do not reflect the underlying continuous spectrum of disease severity. To address these issues, we developed a convolutional Siamese neural network approach to evaluate disease severity at single time points, and change between longitudinal patient visits, on a continuous spectrum. We demonstrate this in two medical imaging domains: retinopathy of prematurity (ROP) in retinal photographs and osteoarthritis in knee radiographs. Our patient cohorts consist of 4861 images from 870 patients in the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) cohort study and 10,012 images from 3021 patients in the Multicenter Osteoarthritis Study (MOST), both of which feature longitudinal imaging data. Multiple expert clinician raters ranked 100 retinal images and 100 knee radiographs from excluded test sets for severity of ROP and osteoarthritis, respectively. The Siamese neural network output for each image, in comparison to a pool of normal reference images, correlates with disease severity rank (rank correlation 0.87 for ROP and 0.89 for osteoarthritis), both within and between the clinical grading categories. Thus, this output can represent the continuous spectrum of disease severity at any single time point. The difference in these outputs can be used to show change over time. Alternatively, paired images from the same patient at two time points can be directly compared using the Siamese neural network, resulting in an additional continuous measure of change between images. Importantly, our approach does not require manual localization of the pathology of interest and requires only a binary label for training (same versus different). The location of disease and the site of change detected by the algorithm can be visualized using an occlusion sensitivity map-based approach. For a longitudinal binary change detection task, our Siamese neural networks achieve test set receiver operating characteristic areas under the curve (AUCs) of up to 0.90 in evaluating ROP or knee osteoarthritis change, depending on the change detection strategy. The overall performance on this binary task is similar to that of a conventional convolutional deep neural network trained for multi-class classification. Our results demonstrate that convolutional Siamese neural networks can be a powerful tool for evaluating the continuous spectrum of disease severity and change in medical imaging.
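The severity-scoring scheme, stripped of the trained CNN, reduces to a simple pattern: a shared embedding applied to both branches, with severity read out as the distance from an image's embedding to a pool of normal reference embeddings. In this sketch the "encoder" is a stand-in random linear map on synthetic vectors, purely to show the mechanics.

```python
# Siamese-style continuous severity: distance to a pool of normal references.
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(16, 64))              # stand-in shared encoder weights

def embed(img):
    return W @ img                         # same weights for every input (Siamese)

reference_pool = np.array([embed(rng.normal(0.0, 1.0, 64)) for _ in range(20)])

def severity(img):
    # continuous severity = median embedding distance to the normal pool
    d = np.linalg.norm(reference_pool - embed(img), axis=1)
    return float(np.median(d))

mild = rng.normal(0.3, 1.0, 64)            # slight drift from "normal"
severe = rng.normal(1.5, 1.0, 64)          # large drift from "normal"
print(round(severity(mild), 1), round(severity(severe), 1))
```

Change over time follows the same pattern: either the difference of two such scores, or a direct embedding distance between the two time points' images.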
Source
http://dx.doi.org/10.1038/s41746-020-0255-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7099081
March 2020

Learning Invariant Representations from EEG via Adversarial Inference.

IEEE Access 2020 4;8:27074-27085. Epub 2020 Feb 4.

Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA.

Discovering and exploiting shared, invariant neural activity in electroencephalogram (EEG) based classification tasks is of significant interest for the generalizability of decoding models across subjects or EEG recording sessions. While deep neural networks are emerging as generic EEG feature extractors, this transfer learning aspect usually relies on the prior assumption that deep networks naturally behave as subject- (or session-) invariant EEG feature extractors. We propose a further step towards invariance of EEG deep learning frameworks by enforcing it in a systematic way during model training. We introduce an adversarial inference approach to learn representations that are invariant to inter-subject variabilities within a discriminative setting. We perform experimental studies using a publicly available motor imagery EEG dataset, and state-of-the-art convolutional neural network based EEG decoding models, within the proposed adversarial learning framework. We present our results in cross-subject model transfer scenarios, demonstrate neurophysiological interpretations of the learned networks, and discuss potential insights offered by adversarial inference for the growing field of deep learning for EEG.
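The adversarial framework itself requires a deep-learning stack, but its goal, features that transfer across subjects because subject-specific nuisance structure has been removed, can be shown with a much simpler stand-in: subtracting each subject's feature mean before classification. This sketch (entirely synthetic data, not the paper's method) contrasts raw and subject-centered cross-subject transfer.

```python
# Simplified stand-in for subject-invariant representations: per-subject mean
# removal before cross-subject transfer with a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def subject_data(offset, n=100):
    y = rng.integers(0, 2, n)
    X = np.column_stack([y + rng.normal(0, 0.5, n),      # task-related feature
                         rng.normal(0, 0.5, n)]) + offset  # subject nuisance
    return X, y

(Xa, ya), (Xb, yb) = subject_data(0.0), subject_data(3.0)  # train / test subjects

raw = LogisticRegression().fit(Xa, ya).score(Xb, yb)
inv = LogisticRegression().fit(Xa - Xa.mean(0), ya).score(Xb - Xb.mean(0), yb)
print(round(raw, 2), round(inv, 2))
```

The raw model collapses on the shifted test subject, while centering restores transfer; the adversarial inference above learns such invariances inside a deep network instead of hand-coding them.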
http://dx.doi.org/10.1109/access.2020.2971600
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7971154
February 2020

Disentangled Adversarial Autoencoder for Subject-Invariant Physiological Feature Extraction.

IEEE Signal Process Lett 2020 31;27:1565-1569. Epub 2020 Aug 31.

Cognitive Systems Laboratory, Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA.

Recent developments in biosignal processing have enabled users to exploit their physiological status for manipulating devices in a reliable and safe manner. One major challenge of physiological sensing lies in the variability of biosignals across different users and tasks. To address this issue, we propose an adversarial feature extractor for transfer learning to exploit disentangled universal representations. We consider the trade-off between task-relevant features and user-discriminative information by introducing additional adversary and nuisance networks in order to manipulate the latent representations such that the learned feature extractor is applicable to unknown users and various tasks. Results on cross-subject transfer evaluations exhibit the benefits of the proposed framework, with up to 8.8% improvement in average accuracy of classification, and demonstrate adaptability to a broader range of subjects.
http://dx.doi.org/10.1109/lsp.2020.3020215
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7977990
August 2020

Robust Fusion of c-VEP and Gaze.

IEEE Sens Lett 2019 Jan 30;3(1). Epub 2018 Oct 30.

The authors are with the Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, 02115, USA.

Brain computer interfaces (BCIs) are an emerging technology, serving as a communication interface for people with neuromuscular disorders. Electroencephalography (EEG) and gaze signals are among the commonly used inputs for the user intent classification problem arising in BCIs. Fusing different input modalities, i.e. EEG and gaze, is an obvious but effective solution for achieving high performance on this problem. Even though simplistic approaches exist for fusing these two sources of evidence, a more effective method is required to reach classification performance and speed suitable for real-life scenarios. One of the main problems left unaddressed is highly noisy real-life data. In the context of the BCI framework used in this work, noisy data stem from user error in the form of tracking a nontarget stimulus, which in turn yields misleading EEG and gaze signals. We propose a method for fusing the aforementioned evidence sources in a probabilistic manner that is highly robust against noisy data. We show the performance of the proposed method on real EEG and gaze data for different configurations of noise control variables. Compared to the regular fusion method, the robust method achieves up to 15% higher classification accuracy.
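One standard way to obtain this kind of robustness is to hedge each modality's likelihood with a uniform mixture, so that a single misleading observation cannot veto the true target. The sketch below is a generic illustration of that idea, not the paper's exact model; the candidate set, likelihood values, and mixing weight ε are made up for the example.

```python
import numpy as np

def robust_likelihood(lik, eps=0.1):
    """Mix the model likelihood with a uniform term so a single misleading
    observation (e.g. the user tracking a nontarget stimulus) cannot
    drive the true candidate's posterior to zero."""
    return (1.0 - eps) * lik + eps / len(lik)

def fuse(eeg_lik, gaze_lik, prior):
    """Naive-Bayes style fusion of the robustified evidences."""
    post = prior * robust_likelihood(eeg_lik) * robust_likelihood(gaze_lik)
    return post / post.sum()

prior = np.full(4, 0.25)                     # 4 hypothetical candidates
eeg_lik = np.array([0.7, 0.1, 0.1, 0.1])     # EEG favors candidate 0
gaze_lik = np.array([0.0, 0.1, 0.1, 0.8])    # noisy gaze vetoes candidate 0

post = fuse(eeg_lik, gaze_lik, prior)
```

Without the uniform mixture, the zero gaze likelihood would eliminate candidate 0 outright; with it, candidate 0 retains posterior mass and further evidence can still recover it.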
http://dx.doi.org/10.1109/LSENS.2018.2878705
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6927474
January 2019

Adversarial Deep Learning in EEG Biometrics.

IEEE Signal Process Lett 2019 May 27;26(5):710-714. Epub 2019 Mar 27.

Cognitive Systems Laboratory at Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA.

Deep learning methods for person identification based on electroencephalographic (EEG) brain activity encounter the problem of exploiting temporally correlated structures or recording-session-specific variability within EEG. Furthermore, recent methods have mostly been trained and evaluated on single-session EEG data. We address this problem from an invariant representation learning perspective. We propose an adversarial inference approach that extends such deep learning models to learn session-invariant, person-discriminative representations, providing robustness in terms of longitudinal usability. Using adversarial learning within a deep convolutional network, we empirically assess our approach on longitudinally collected EEG data and show improved person identification from half-second EEG epochs.
http://dx.doi.org/10.1109/LSP.2019.2906826
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6897355
May 2019

Optimal Query Selection Using Multi-Armed Bandits.

IEEE Signal Process Lett 2018 Dec 26;25(12):1870-1874. Epub 2018 Oct 26.

Northeastern University.

Query selection for latent variable estimation is conventionally performed by opting for observations with low noise or by optimizing information theoretic objectives that reduce estimated uncertainty around the current best estimate. In these approaches, the system typically makes a decision by leveraging the currently available information about the state. However, trusting the current best estimate leads to poor query selection when the truth is far from that estimate, which negatively impacts the speed and accuracy of the latent variable estimation procedure. We introduce a novel sequential adaptive action value function for query selection using the multi-armed bandit (MAB) framework, which allows us to find a tractable solution. For this adaptive sequential query selection method, we analytically show (i) performance improvement in query selection for a dynamical system and (ii) the conditions under which the model outperforms competitors. We also present favorable empirical assessments of the method's performance, compared to alternative methods, using both Monte Carlo simulations and human-in-the-loop experiments with a brain computer interface (BCI) typing system in which a language model provides the prior information.
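The flavor of bandit-based query selection can be illustrated with the classic UCB1 action value — the empirical mean plus an optimism bonus — though the paper derives its own sequential adaptive action value function rather than UCB1. The success rates below are hypothetical.

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """Pick the query maximizing empirical mean + optimism bonus."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # try every query once before trusting estimates
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
                             + math.sqrt(2.0 * math.log(t) / counts[i]))

random.seed(0)
true_p = [0.2, 0.5, 0.8]   # hypothetical per-query success rates
counts = [0, 0, 0]
rewards = [0.0, 0.0, 0.0]
for t in range(1, 501):
    arm = ucb1_select(counts, rewards, t)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < true_p[arm] else 0.0
```

Over many rounds the selection concentrates on the most informative query while still occasionally revisiting the others, which is exactly the exploration-exploitation balance that pure greedy selection from the current best estimate lacks.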
http://dx.doi.org/10.1109/LSP.2018.2878066
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6777547
December 2018

Asymmetric Loss Functions and Deep Densely Connected Networks for Highly Imbalanced Medical Image Segmentation: Application to Multiple Sclerosis Lesion Detection.

IEEE Access 2019 12;7:721-1735. Epub 2018 Dec 12.

Computational Radiology Laboratory, Boston Children's Hospital, and Harvard Medical School, Boston MA 02115.

Fully convolutional deep neural networks have been shown to be fast and precise frameworks with great potential in image segmentation. One of the major challenges in training such networks arises when the data are imbalanced, which is common in many medical imaging applications such as lesion segmentation, where lesion-class voxels are often far fewer in number than non-lesion voxels. A network trained on imbalanced data may make predictions with high precision and low recall, being severely biased towards the non-lesion class; this is particularly undesired in medical applications, where false negatives matter more than false positives. Various methods have been proposed to address this problem, including two-step training, sample re-weighting, balanced sampling, and, more recently, similarity loss functions and focal loss. In this work we trained fully convolutional deep neural networks using an asymmetric similarity loss function to mitigate data imbalance and achieve a much better trade-off between precision and recall. To this end, we developed a 3D fully convolutional densely connected network (FC-DenseNet) with large overlapping image patches as input and an asymmetric similarity loss layer based on the Tversky index. We used large overlapping image patches as inputs for intrinsic and extrinsic data augmentation, a patch selection algorithm, and a patch prediction fusion strategy using B-spline weighted soft voting to account for the uncertainty of prediction at patch borders. We applied this method to multiple sclerosis (MS) lesion segmentation on two datasets, MSSEG 2016 and the ISBI longitudinal MS lesion segmentation challenge, where we achieved average Dice similarity coefficients of 69.9% and 65.74%, respectively, placing among the top performers in both challenges. We compared the performance of our network trained with the asymmetric similarity loss, focal loss, and generalized Dice loss (GDL) functions.
Through September 2018, our network trained with focal loss ranked first according to the ISBI challenge overall score and yielded the lowest reported lesion false positive rate among all submitted methods. Our network trained with the asymmetric similarity loss led to the lowest surface distance and the best lesion true positive rate, arguably the most important performance metric for a clinical decision support system for lesion detection. The asymmetric similarity loss function based on the Tversky index allows training networks that strike a better balance between precision and recall in highly imbalanced image segmentation. We achieved superior performance in MS lesion segmentation using a patchwise 3D FC-DenseNet with a patch prediction fusion strategy, trained with asymmetric similarity loss functions.
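The asymmetric similarity loss builds on the Tversky index, which generalizes Dice by weighting false positives and false negatives separately. A minimal numpy version of the generic formulation follows; the α/β defaults and the toy masks are illustrative, not the paper's tuned values.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """1 - Tversky index. With beta > alpha, false negatives cost more
    than false positives, pushing the network toward higher recall --
    the desired trade-off for lesion detection."""
    tp = float(np.sum(pred * target))
    fp = float(np.sum(pred * (1.0 - target)))
    fn = float(np.sum((1.0 - pred) * target))
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

target = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
misses = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)  # two false negatives
```

With alpha = beta = 0.5 this reduces to the familiar Dice loss; skewing the weights is what makes the loss asymmetric.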
http://dx.doi.org/10.1109/ACCESS.2018.2886371
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6746414
December 2018

Monitoring Disease Progression With a Quantitative Severity Scale for Retinopathy of Prematurity Using Deep Learning.

JAMA Ophthalmol 2019 Jul 3. Epub 2019 Jul 3.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland.

Importance: Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide, but clinical diagnosis is subjective and qualitative.

Objective: To describe a quantitative ROP severity score derived using a deep learning algorithm designed to evaluate plus disease and to assess its utility for objectively monitoring ROP progression.

Design, Setting, And Participants: This retrospective cohort study included images from 5255 clinical examinations of 871 premature infants who met the ROP screening criteria of the Imaging and Informatics in ROP (i-ROP) Consortium, which comprises 9 tertiary care centers in North America, from July 1, 2011, to December 31, 2016. Data analysis was performed from July 2017 to May 2018.

Exposure: A deep learning algorithm was used to assign a continuous ROP vascular severity score from 1 (most normal) to 9 (most severe) at each examination based on a single posterior photograph compared with a reference standard diagnosis (RSD) simplified into 4 categories: no ROP, mild ROP, type 2 ROP or pre-plus disease, or type 1 ROP. Disease course was assessed longitudinally across multiple examinations for all patients.

Main Outcomes And Measures: Mean ROP vascular severity score progression over time compared with the RSD.

Results: A total of 5255 clinical examinations from 871 infants (mean [SD] gestational age, 27.0 [2.0] weeks; 493 [56.6%] male; mean [SD] birth weight, 949 [271] g) were analyzed. The median severity scores for each category were as follows: 1.1 (interquartile range [IQR], 1.0-1.5) (no ROP), 1.5 (IQR, 1.1-3.4) (mild ROP), 4.6 (IQR, 2.4-5.3) (type 2 and pre-plus), and 7.5 (IQR, 5.0-8.7) (treatment-requiring ROP) (P < .001). When the long-term differences in the median severity scores across time between the eyes progressing to treatment and those who did not eventually require treatment were compared, the median score was higher in the treatment group by 0.06 at 30 to 32 weeks, 0.75 at 32 to 34 weeks, 3.56 at 34 to 36 weeks, 3.71 at 36 to 38 weeks, and 3.24 at 38 to 40 weeks postmenstrual age (P < .001 for all comparisons).

Conclusions And Relevance: The findings suggest that the proposed ROP vascular severity score is associated with category of disease at a given point in time and clinical progression of ROP in premature infants. Automated image analysis may be used to quantify clinical disease progression and identify infants at high risk for eventually developing treatment-requiring ROP. This finding has implications for quality and delivery of ROP care and for future approaches to disease classification.
http://dx.doi.org/10.1001/jamaophthalmol.2019.2433
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6613341
July 2019

A Quantitative Severity Scale for Retinopathy of Prematurity Using Deep Learning to Monitor Disease Regression After Treatment.

JAMA Ophthalmol 2019 Jul 3. Epub 2019 Jul 3.

Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland.

Importance: Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide, but treatment failure and disease recurrence are important causes of adverse outcomes in patients with treatment-requiring ROP (TR-ROP).

Objectives: To apply an automated ROP vascular severity score obtained using a deep learning algorithm and to assess its utility for objectively monitoring ROP regression after treatment.

Design, Setting, And Participants: This retrospective cohort study used data from the Imaging and Informatics in ROP consortium, which comprises 9 tertiary referral centers in North America that screen high volumes of at-risk infants for ROP. Images of 5255 clinical eye examinations from 871 infants performed between July 2011 and December 2016 were assessed for eligibility in the present study. The disease course was assessed over time across multiple examinations for patients with TR-ROP. Infants born prematurely meeting screening criteria for ROP who developed TR-ROP and who had images captured within 4 weeks before and after treatment as well as at the time of treatment were included.

Main Outcomes And Measures: The primary outcome was mean (SD) ROP vascular severity score before, at time of, and after treatment. A deep learning classifier was used to assign a continuous ROP vascular severity score, which ranged from 1 (normal) to 9 (most severe), at each examination. A secondary outcome was the difference in ROP vascular severity score among eyes treated with laser or the vascular endothelial growth factor antagonist bevacizumab. Differences between groups for both outcomes were assessed using unpaired 2-tailed t tests with Bonferroni correction.

Results: Of 5255 examined eyes, 91 developed TR-ROP, of which 46 eyes met the inclusion criteria based on the available images. The mean (SD) birth weight of those patients was 653 (185) g, with a mean (SD) gestational age of 24.9 (1.3) weeks. The mean (SD) ROP vascular severity scores significantly increased 2 weeks prior to treatment (4.19 [1.75]), peaked at treatment (7.43 [1.89]), and decreased for at least 2 weeks after treatment (4.00 [1.88]) (all P < .001). Eyes requiring retreatment with laser had higher ROP vascular severity scores at the time of initial treatment compared with eyes receiving a single treatment (P < .001).

Conclusions And Relevance: This quantitative ROP vascular severity score appears to consistently reflect clinical disease progression and posttreatment regression in eyes with TR-ROP. These study results may have implications for the monitoring of patients with ROP for treatment failure and disease recurrence and for determining the appropriate level of disease severity for primary treatment in eyes with aggressive disease.
http://dx.doi.org/10.1001/jamaophthalmol.2019.2442
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6613298
July 2019

Classification and comparison via neural networks.

Neural Netw 2019 Oct 19;118:65-80. Epub 2019 Jun 19.

Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Avenue, 409 Dana, Boston, MA 02115, USA.

We consider learning from comparison labels generated as follows: given two samples in a dataset, a labeler produces a label indicating their relative order. Such comparison labels scale quadratically with the dataset size; most importantly, in practice, they often exhibit lower variance than class labels. We propose a new neural network architecture based on Siamese networks to incorporate both class and comparison labels in the same training pipeline, using Bradley-Terry and Thurstone loss functions. Our architecture leads to a significant improvement in predicting both class and comparison labels, increasing classification AUC by as much as 35% and comparison AUC by as much as 6% on several real-life datasets. We further show that, by incorporating comparisons, training from few samples becomes possible: a deep neural network of 5.9 million parameters trained on 80 images attains a 0.92 AUC when incorporating comparisons.
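The Bradley-Terry loss mentioned above models the probability that sample a outranks sample b as a logistic function of the score difference. A minimal version, with network scores replaced by plain floats for illustration:

```python
import math

def bradley_terry_nll(score_a, score_b, a_wins):
    """Negative log-likelihood of a comparison label under
    P(a beats b) = sigmoid(score_a - score_b)."""
    p = 1.0 / (1.0 + math.exp(-(score_a - score_b)))
    return -math.log(p if a_wins else 1.0 - p)
```

In the Siamese training pipeline, score_a and score_b come from the same network applied to the two samples, so minimizing this loss over comparison pairs shapes a single scoring function consistent with the labeler's orderings.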
http://dx.doi.org/10.1016/j.neunet.2019.06.004
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6718310
October 2019

Predicting aggression to others in youth with autism using a wearable biosensor.

Autism Res 2019 08 21;12(8):1286-1296. Epub 2019 Jun 21.

Maine Medical Center Research Institute, Portland, Maine.

Unpredictable and potentially dangerous aggressive behavior by youth with Autism Spectrum Disorder (ASD) can isolate them from foundational educational, social, and familial activities, thereby markedly exacerbating morbidity and costs associated with ASD. This study investigates whether preceding physiological and motion data measured by a wrist-worn biosensor can predict aggression to others by youth with ASD. We recorded peripheral physiological (cardiovascular and electrodermal activity) and motion (accelerometry) signals from a biosensor worn by 20 youth with ASD (ages 6-17 years, 75% male, 85% minimally verbal) during 69 independent naturalistic observation sessions with concurrent behavioral coding in a specialized inpatient psychiatry unit. We developed prediction models based on ridge-regularized logistic regression. Our results suggest that aggression to others can be predicted 1 min before it occurs using 3 min of prior biosensor data with an average area under the curve of 0.71 for a global model and 0.84 for person-dependent models. The biosensor was well tolerated, we obtained useable data in all cases, and no users withdrew from the study. Relatively high predictive accuracy was achieved using antecedent physiological and motion data. Larger trials are needed to further establish an ideal ratio of measurement density to predictive accuracy and reliability. These findings lay the groundwork for the future development of precursor behavior analysis and just-in-time adaptive intervention systems to prevent or mitigate the emergence, occurrence, and impact of aggression in ASD. Autism Res 2019, 12: 1286-1296. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Unpredictable aggression can create a barrier to accessing community, therapeutic, medical, and educational services. 
The present study evaluated whether data from a wearable biosensor can be used to predict aggression to others by youth with autism spectrum disorder (ASD). Results demonstrate that aggression to others can be predicted 1 min before it occurs with high accuracy, laying the groundwork for the future development of preemptive behavioral interventions and just-in-time adaptive intervention systems to prevent or mitigate the emergence, occurrence, and impact of aggression to others in ASD.
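The prediction model named above — ridge-regularized logistic regression — can be sketched with plain gradient descent. The synthetic features below stand in for windowed biosensor signals; the dimensions, penalty λ, and learning rate are illustrative assumptions, not the study's settings.

```python
import numpy as np

def fit_ridge_logreg(X, y, lam=0.1, lr=0.5, iters=300):
    """Logistic regression with an L2 (ridge) penalty on the weights,
    fit by batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * w  # log-loss grad + ridge term
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))                    # placeholder feature windows
true_w = np.array([2.0, -2.0, 1.0, 0.0, 0.0])
y = (X @ true_w + 0.5 * rng.normal(size=200) > 0).astype(float)

w = fit_ridge_logreg(X, y)
accuracy = float(((X @ w > 0) == (y > 0.5)).mean())
```

The ridge penalty shrinks the weights toward zero, which is what gives the model stability when the physiological features are correlated and the aggression episodes are few.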
http://dx.doi.org/10.1002/aur.2151
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6988899
August 2019

IMPROVED CLASSIFICATION IN TACTILE BCIS USING A NOISY LABEL MODEL.

Proc IEEE Int Symp Biomed Imaging 2018 Apr 24;2018:757-761. Epub 2018 May 24.

Cognitive Systems Laboratory, ECE Department, Northeastern University.

Tactile BCIs have gained recent popularity in the BCI community due to the advantages of using a stimulation medium that does not occupy the user's visual or auditory senses, is naturally inconspicuous, and can still be used by a person who may be visually or auditorily impaired. While many systems have been proposed that utilize the P300 response elicited through an oddball task, these systems struggle to classify user responses with accuracies comparable to many visual-stimulus-based systems. In this study, we model tactile ERP generation as label noise and develop a novel BCI paradigm for binary communication designed to minimize label confusion. The classification model is based on a modified Gaussian mixture and trained using expectation maximization (EM). Finally, after testing on multiple subjects, we show that this approach yields cross-validated accuracies significantly above chance for all users, suggesting that it is robust and reliable for a variety of binary communication-based applications.
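The classification model is a Gaussian mixture fit by EM; the generic EM loop for a two-component 1-D mixture looks like the textbook sketch below (synthetic data, and not the paper's modified label-noise mixture).

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM."""
    mu = np.array([x.min(), x.max()], dtype=float)      # crude initialization
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])
pi, mu, sigma = em_gmm_1d(x)
```

In the label-noise setting, the mixture components play the role of "true" versus "confused" response classes, and the responsibilities computed in the E-step are what allow noisy labels to be down-weighted rather than trusted outright.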
http://dx.doi.org/10.1109/ISBI.2018.8363683
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6525616
April 2018

INCORPORATING TEMPORAL DEPENDENCY ON ERP BASED BCI.

Proc IEEE Int Symp Biomed Imaging 2018 Apr 24;2018:752-756. Epub 2018 May 24.

Cognitive Systems Laboratory, ECE Department, Northeastern University.

In brain computer interface (BCI) systems based on event related potentials (ERPs), a windowed electroencephalography (EEG) signal is considered for the assumed duration of the ERP. In BCI applications, the inter-stimulus interval is shorter than the ERP duration. This causes temporal dependencies across the observed potentials and therefore precludes treating the observations as independent. Conventionally, however, the data are assumed independent to reduce complexity. In this paper we propose a graphical model that takes the temporal dependency into account by labeling each time sample. We also propose a formulation that exploits the time-series structure of the EEG.
http://dx.doi.org/10.1109/ISBI.2018.8363682
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6525617
April 2018