Publications by authors named "Andreas Maier"

248 Publications

Glaucoma classification in 3 x 3 mm en face macular scans using deep learning in a different plexus.

Biomed Opt Express 2021 Dec 9;12(12):7434-7444. Epub 2021 Nov 9.

Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany.

Glaucoma is among the leading causes of irreversible blindness worldwide. If the disease is diagnosed and treated early enough, its progression can be stopped or slowed down. Therefore, it would be very valuable to detect the early stages of glaucoma, which are mostly asymptomatic, by broad screening. This study examines different computational features that can be automatically deduced from images and their performance on the classification task of differentiating glaucoma patients and healthy controls. The data used for this study are 3 × 3 mm en face optical coherence tomography angiography (OCTA) images of different retinal projections (of the whole retina, the superficial vascular plexus (SVP), the intermediate capillary plexus (ICP) and the deep capillary plexus (DCP)) centered around the fovea. Our results show quantitatively that the features automatically extracted by convolutional neural networks (CNNs) perform similarly well or better than handcrafted ones when used to distinguish glaucoma patients from healthy controls. On the whole retina projection and the SVP projection, CNNs outperform the handcrafted features presented in the literature. The area under the receiver operating characteristic curve (AUROC) on the SVP projection is 0.967, which is comparable to the best values reported in the literature. This is achieved despite using the small 3 × 3 mm field of view, which has been reported as disadvantageous for handcrafted vessel density features in previous works. A detailed analysis of our CNN method, using attention maps, suggests that this performance increase can be partially explained by the CNN automatically relying more on areas of higher relevance for feature extraction.
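As an illustration of the classification setup described above, a minimal sketch follows: a small CNN scores single-channel en face OCTA projections and is evaluated with AUROC. The architecture, the 304 × 304 input size, and the random stand-in data are assumptions, not the study's configuration.

```python
# Hedged sketch: tiny binary CNN classifier for en face OCTA projections,
# evaluated with AUROC. Not the authors' network.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class OctaCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)      # logit for "glaucoma"

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1)).squeeze(1)

model = OctaCNN()
images = torch.randn(8, 1, 304, 304)            # stand-in SVP projections
labels = torch.tensor([0., 1., 0., 1., 0., 1., 0., 1.])
logits = model(images)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()                                  # one step; optimizer omitted
auroc = roc_auc_score(labels.numpy(), logits.detach().sigmoid().numpy())
print(f"AUROC on the toy batch: {auroc:.3f}")
```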
DOI: http://dx.doi.org/10.1364/BOE.439991
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8713669
December 2021

Deep learning-based extended field of view computed tomography image reconstruction: influence of network design on image estimation outside the scan field of view.

Biomed Phys Eng Express 2022 Jan 5. Epub 2022 Jan 5.

Friedrich-Alexander-Universität Erlangen-Nürnberg, Martensstr. 3, Erlangen, Bayern, 91058, Germany.

The problem of data truncation in computed tomography (CT) is caused by the missing data that arise when the patient exceeds the scan field of view (SFOV) of a CT scanner. The reconstruction of a truncated scan produces severe truncation artifacts both inside and outside the SFOV. We have employed a deep learning-based approach to extend the field of view and suppress truncation artifacts. Thereby, our aim is to generate a good estimate of the real patient data, not to provide a perfect, diagnostic image even in regions beyond the SFOV of the CT scanner. This estimate could then be used as an input to higher-order reconstruction algorithms [1]. To evaluate the influence of the network structure and layout on the results, three convolutional neural networks (CNNs) were investigated in this paper: a general CNN called ConvNet, an autoencoder, and the U-Net architecture. Additionally, the impact of L1, L2, structural dissimilarity, and perceptual loss functions on the neural network's learning was assessed and evaluated. The evaluation on a data set comprising 12 truncated test patients demonstrated that the U-Net in combination with the structural dissimilarity loss showed the best performance in terms of image restoration in regions beyond the SFOV of the CT scanner. Moreover, this network produced the best mean absolute error, L1, L2, and structural dissimilarity evaluation measures on the test set compared to the other applied networks. Therefore, it is possible to achieve truncation artifact removal using deep learning techniques.
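The evaluation measures named above (MAE, L1/L2 error, structural dissimilarity) can be sketched as follows, assuming reconstructions given as 2D numpy arrays; this uses scikit-image's SSIM and is illustrative, not the authors' evaluation code.

```python
# Sketch of reconstruction-quality measures; DSSIM = (1 - SSIM) / 2.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(reference, estimate):
    diff = reference - estimate
    mae = np.mean(np.abs(diff))             # mean absolute error
    l1 = np.sum(np.abs(diff))               # L1 norm of the residual
    l2 = np.sqrt(np.sum(diff ** 2))         # L2 norm of the residual
    ssim = structural_similarity(
        reference, estimate, data_range=reference.max() - reference.min())
    dssim = (1.0 - ssim) / 2.0              # structural dissimilarity
    return {"MAE": mae, "L1": l1, "L2": l2, "DSSIM": dssim}

ref = np.random.rand(512, 512)              # stand-in ground-truth slice
est = ref + 0.05 * np.random.randn(512, 512)
print(evaluate(ref, est))
```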
DOI: http://dx.doi.org/10.1088/2057-1976/ac47fc
January 2022

Computer-assisted mitotic count using a deep learning-based algorithm improves interobserver reproducibility and accuracy.

Vet Pathol 2021 Dec 30:3009858211067478. Epub 2021 Dec 30.

Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany.

The mitotic count (MC) is an important histological parameter for prognostication of malignant neoplasms. However, it is subject to inter- and intraobserver discrepancies due to difficulties in selecting the region of interest (MC-ROI) and in identifying or classifying mitotic figures (MFs). Recent progress in the field of artificial intelligence has allowed the development of high-performance algorithms that may improve standardization of the MC. As algorithmic predictions are not flawless, computer-assisted review by pathologists may ensure reliability. In the present study, we compared partial (MC-ROI preselection) and full (additional visualization of MF candidates and display of algorithmic confidence values) computer-assisted MC analysis to routine (unaided) MC analysis by 23 pathologists for whole-slide images of 50 canine cutaneous mast cell tumors (ccMCTs). Algorithmic predictions aimed to assist pathologists in detecting mitotic hotspot locations, reducing omission of MFs, and improving classification against imposters. The interobserver consistency for the MC significantly increased with computer assistance (interobserver correlation coefficient, ICC = 0.92) compared to the unaided approach (ICC = 0.70). Classification into prognostic stratifications had a higher accuracy with computer assistance. The algorithmically preselected hotspot MC-ROIs had consistently higher MCs than the manually selected MC-ROIs. Compared to a ground truth (developed with immunohistochemistry for phosphohistone H3), pathologist performance in detecting individual MFs was augmented when using computer assistance (F1-score of 0.68 increased to 0.79), with a reduction in false negatives by 38%. The results of this study demonstrate that computer assistance may lead to more reproducible and accurate MCs in ccMCTs.
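A hedged sketch of the consistency statistic quoted above: a two-way random-effects, absolute-agreement, single-rater intraclass correlation coefficient, ICC(2,1), as commonly used for interobserver agreement of counts. The count matrix is invented; this is not the study's statistics code.

```python
# ICC(2,1) from a (n_targets x k_raters) matrix of mitotic counts.
import numpy as np

def icc_2_1(ratings):
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)        # per-case means
    col_means = ratings.mean(axis=0)        # per-pathologist means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between targets
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((ratings - row_means[:, None]
                  - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

counts = np.array([[12, 14, 13], [3, 2, 4], [25, 27, 24], [7, 8, 6]])
print(f"ICC(2,1) = {icc_2_1(counts):.2f}")
```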
DOI: http://dx.doi.org/10.1177/03009858211067478
December 2021

Synthetic Image Rendering Solves Annotation Problem in Deep Learning Nanoparticle Segmentation.

Small Methods 2021 Jul 3;5(7):e2100223. Epub 2021 May 3.

Institute of Optics, Information and Photonics, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany.

Nanoparticles occur in various environments as a consequence of man-made processes, which raises concerns about their impact on the environment and human health. To allow for proper risk assessment, a precise and statistically relevant analysis of particle characteristics (such as size, shape, and composition) is required that would greatly benefit from automated image analysis procedures. While deep learning shows impressive results in object detection tasks, its applicability is limited by the amount of representative, experimentally collected and manually annotated training data. Here, an elegant, flexible, and versatile method to bypass this costly and tedious data acquisition process is presented. It is shown that rendering software can be used to generate realistic, synthetic training data to train a state-of-the-art deep neural network. Using this approach, a segmentation accuracy comparable to that of manual annotations can be achieved for toxicologically relevant metal-oxide nanoparticle ensembles, which were chosen as examples. The presented study paves the way toward the use of deep learning for automated, high-throughput particle detection in a variety of imaging techniques such as microscopy and spectroscopy, for a wide range of applications, including the detection of micro- and nanoplastic particles in water and tissue samples.
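A minimal sketch of the core idea, synthetic images come with pixel-perfect masks for free: random disks stand in for rendered particles here, and all shapes and noise parameters are assumptions rather than the paper's rendering pipeline.

```python
# Generate a synthetic training pair (image, mask) without manual annotation.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_sample(size=256, n_particles=20):
    image = np.zeros((size, size), dtype=np.float32)
    mask = np.zeros((size, size), dtype=np.uint8)
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_particles):
        cy, cx = rng.integers(0, size, 2)
        r = rng.integers(3, 12)
        disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        image[disk] = rng.uniform(0.5, 1.0)     # particle intensity
        mask[disk] = 1                          # ground-truth label for free
    image += 0.1 * rng.standard_normal(image.shape)  # detector noise
    return image, mask

image, mask = synthetic_sample()
```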
DOI: http://dx.doi.org/10.1002/smtd.202100223
July 2021

FIN-PRINT a fully-automated multi-stage deep-learning-based framework for the individual recognition of killer whales.

Sci Rep 2021 Dec 6;11(1):23480. Epub 2021 Dec 6.

Department of Computer Science - Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstr. 3, 91058, Erlangen, Germany.

Biometric identification techniques such as photo-identification require an array of unique natural markings to identify individuals. From 1975 to the present, Bigg's killer whales have been photo-identified along the west coast of North America, resulting in one of the largest and longest-running cetacean photo-identification datasets. However, data maintenance and analysis are extremely time- and resource-consuming. This study transfers the procedure of killer whale image identification into a fully automated, multi-stage, deep learning framework, entitled FIN-PRINT. It is composed of multiple sequentially ordered sub-components. FIN-PRINT is trained and evaluated on a dataset collected over an 8-year period (2011-2018) in the coastal waters off western North America, including 121,000 human-annotated identification images of Bigg's killer whales. First, object detection is performed to identify unique killer whale markings, resulting in 94.4% recall, 94.1% precision, and 93.4% mean average precision (mAP). Second, all previously identified natural killer whale markings are extracted. The third step introduces a data enhancement mechanism by filtering between valid and invalid markings from previous processing levels, achieving 92.8% recall, 97.5% precision, and 95.2% accuracy. The fourth and final step involves multi-class individual recognition. When evaluated on the network test set, it achieved an accuracy of 92.5% with 97.2% top-3 unweighted accuracy (TUA) for the 100 most commonly photo-identified killer whales. Additionally, the method achieved an accuracy of 84.5% and a TUA of 92.9% when applied to the entire 2018 image collection of the 100 most common killer whales. The source code of FIN-PRINT can be adapted to other species and will be publicly available.
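The top-3 unweighted accuracy (TUA) reported above can be sketched as follows: a prediction counts as correct if the true individual is among the three highest-scoring classes. The scores and labels below are random stand-ins.

```python
# Top-k accuracy over per-class scores; k=3 mirrors the TUA above.
import numpy as np

def top_k_accuracy(scores, labels, k=3):
    """scores: (n_samples, n_classes); labels: (n_samples,) integer IDs."""
    topk = np.argsort(scores, axis=1)[:, -k:]       # k best classes each
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

scores = np.random.rand(1000, 100)                  # 100 killer whales
labels = np.random.randint(0, 100, size=1000)
print(f"top-3 accuracy: {top_k_accuracy(scores, labels):.3f}")
```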
DOI: http://dx.doi.org/10.1038/s41598-021-02506-6
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8648837
December 2021

Robust partial Fourier reconstruction for diffusion-weighted imaging using a recurrent convolutional neural network.

Magn Reson Med 2021 Nov 28. Epub 2021 Nov 28.

Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany.

Purpose: To develop an algorithm for robust partial Fourier (PF) reconstruction applicable to diffusion-weighted (DW) images with non-smooth phase variations.

Methods: Based on an unrolled proximal splitting algorithm, a neural network architecture is derived, which alternates between data consistency operations and regularization implemented by recurrent convolutions. In order to exploit correlations, multiple repetitions of the same slice are jointly reconstructed under consideration of permutation-equivariance. The algorithm is trained on DW liver data of 60 volunteers and evaluated on retrospectively and prospectively subsampled data of different anatomies and resolutions.
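A minimal sketch of the data-consistency operation that such unrolled reconstructions alternate with learned regularization: acquired partial Fourier samples are re-inserted into the k-space of the current image estimate. The sampling pattern, sizes, and single-coil setting are assumptions, not the authors' implementation.

```python
# k-space data consistency for a partial Fourier (PF) acquisition.
import numpy as np

def data_consistency(image_estimate, kspace_acquired, sampling_mask):
    """Replace estimated k-space values by measured ones where sampled."""
    k_est = np.fft.fft2(image_estimate)
    k_dc = np.where(sampling_mask, kspace_acquired, k_est)
    return np.fft.ifft2(k_dc)               # complex image; magnitude as needed

ny, nx = 128, 128
image = np.random.rand(ny, nx)
mask = np.zeros((ny, nx), dtype=bool)
mask[: int(0.625 * ny), :] = True           # e.g. 5/8 PF along one axis
kspace = np.fft.fft2(image) * mask          # "measured" partial data
estimate = np.random.rand(ny, nx)           # current network output
estimate = data_consistency(estimate, kspace, mask)
```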

Results: The proposed method is able to significantly outperform conventional PF techniques on retrospectively subsampled data in terms of quantitative measures as well as perceptual image quality. In this context, joint reconstruction of repetitions as well as the particular type of recurrent network unrolling are found to be beneficial with respect to reconstruction quality. On prospectively PF-sampled data, the proposed method enables DW imaging with higher signal without sacrificing image resolution or introducing additional artifacts. Alternatively, it can be used to counter the TE increase in acquisitions with higher resolution. Furthermore, the method generalizes to prospective brain data exhibiting anatomies and contrasts not present in the training set.

Conclusion: This work demonstrates that robust PF reconstruction of DW data is feasible even at strong PF factors in anatomies prone to phase variations. Since the proposed method does not rely on smoothness priors of the phase but uses learned recurrent convolutions instead, artifacts of conventional PF methods can be avoided.
DOI: http://dx.doi.org/10.1002/mrm.29100
November 2021

Rule-Based Models for Risk Estimation and Analysis of In-hospital Mortality in Emergency and Critical Care.

Front Med (Lausanne) 2021 Nov 8;8:785711. Epub 2021 Nov 8.

Department of Industrial Engineering and Health, Institute of Medical Engineering, Technical University Amberg-Weiden, Weiden, Germany.

We propose a novel method that uses associative classification and odds ratios to predict in-hospital mortality in emergency and critical care. Manual mortality risk scores have previously been used to assess the care needed for each patient and their need for palliative measures. Automated approaches allow providers to get a quick and objective estimation based on electronic health records. We use association rule mining to find relevant patterns in the dataset. The odds ratio is used instead of classical association rule mining metrics as a quality measure to analyze association instead of frequency. The resulting measures are used to estimate the in-hospital mortality risk. We compare two prediction models: one minimal model with socio-demographic factors that are available at the time of admission and can be provided by the patients themselves, namely gender, ethnicity, type of insurance, language, and marital status, and a full model that additionally includes clinical information like diagnoses, medication, and procedures. The method was tested and validated on MIMIC-IV, a publicly available clinical dataset. The minimal prediction model achieved an area under the receiver operating characteristic curve value of 0.69, while the full prediction model achieved a value of 0.98. The models serve different purposes. The minimal model can be used as a first risk assessment based on patient-reported information. The full model expands on this and provides an updated risk assessment each time a new variable occurs in the clinical case. In addition, the rules in the models allow us to analyze the dataset based on data-backed rules. We provide several examples of interesting rules, including rules that hint at errors in the underlying data, rules that correspond to existing epidemiological research, and rules that were previously unknown and can serve as starting points for future studies.
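A sketch of the rule-quality measure described above: the odds ratio of a rule "antecedent implies in-hospital death", computed from a 2×2 contingency table with a Haldane correction for empty cells. All counts are invented for illustration.

```python
# Odds ratio of an association rule from contingency counts.
def odds_ratio(n_ante_death, n_ante_alive, n_rest_death, n_rest_alive):
    cells = [n_ante_death, n_ante_alive, n_rest_death, n_rest_alive]
    if 0 in cells:                          # Haldane correction
        cells = [c + 0.5 for c in cells]
    a, b, c, d = cells
    return (a / b) / (c / d)

# patients matching the rule antecedent vs. all remaining patients
print(odds_ratio(n_ante_death=30, n_ante_alive=70,
                 n_rest_death=50, n_rest_alive=850))   # ~7.29
```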
DOI: http://dx.doi.org/10.3389/fmed.2021.785711
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8606583
November 2021

IJCARS: BVM 2021 special issue.

Int J Comput Assist Radiol Surg 2021 Dec;16(12):2067-2068

Institut für Medizinische Informatik, Biometrie und Epidemiologie, Charité - Universitätsmedizin Berlin, Campus Benjamin Franklin, Hindenburgdamm 30, 12200, Berlin, Germany.

DOI: http://dx.doi.org/10.1007/s11548-021-02534-7
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8616878
December 2021

Extrapolation of Ventricular Activation Times From Sparse Electroanatomical Data Using Graph Convolutional Neural Networks.

Front Physiol 2021 Oct 18;12:694869. Epub 2021 Oct 18.

Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ, United States.

Electroanatomic mapping is the gold standard for the assessment of ventricular tachycardia. Acquiring high-resolution electroanatomic maps is technically challenging and may require interpolation methods to obtain dense measurements. These methods, however, cannot recover activation times in the entire biventricular domain. This work investigates the use of graph convolutional neural networks to estimate biventricular activation times from sparse measurements. Our method is trained on more than 15,000 synthetic examples of realistic ventricular depolarization patterns generated by a computational electrophysiology model. Using geometries sampled from a statistical shape model of biventricular anatomy, diverse wave dynamics are induced by randomly sampling scar and border zone distributions, locations of initial activation, and tissue conduction velocities. Once trained, the method accurately reconstructs biventricular activation times in left-out synthetic simulations with a mean absolute error of 3.9 ms ± 4.2 ms at a sampling density of one measurement sample per cm. The total activation time is matched with a mean error of 1.4 ms ± 1.4 ms. A significant decrease in errors is observed in all heart zones with an increased number of samples. Without re-training, the network is further evaluated on two datasets: (1) an in-house dataset comprising four ischemic porcine hearts with dense endocardial activation maps; (2) the CRT-EPIGGY19 challenge data comprising endo- and epicardial measurements of 5 infarcted and 6 non-infarcted swine. In both setups the neural network recovers biventricular activation times with a mean absolute error of less than 10 ms, even when providing only a subset of endocardial measurements as input. Furthermore, we present a simple approach to suggest new measurement locations in real time based on the estimated uncertainty of the graph network predictions. The model-guided selection of measurement locations reduces the number of measurements required by a random sampling strategy by 40%, while achieving the same prediction error. In all tested scenarios, the proposed approach estimates biventricular activation times with comparable or better performance than a personalized computational model and with significant runtime advantages.
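A hedged sketch of one graph convolution layer of the kind used to propagate sparse measurements over a mesh: symmetric normalization of the adjacency matrix followed by a learned linear map. This is a generic GCN layer under invented sizes, not the authors' architecture.

```python
# One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        """x: (n_nodes, in_dim); adj: (n_nodes, n_nodes) binary adjacency."""
        a_hat = adj + torch.eye(adj.size(0))        # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
        return torch.relu(self.linear(a_norm @ x))

nodes, feats = 500, 4                               # e.g. xyz + sparse time
x = torch.randn(nodes, feats)
adj = (torch.rand(nodes, nodes) > 0.98).float()
adj = ((adj + adj.T) > 0).float()                   # make symmetric
h = GraphConv(feats, 16)(x, adj)
```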
DOI: http://dx.doi.org/10.3389/fphys.2021.694869
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8558498
October 2021

A disease network-based deep learning approach for characterizing melanoma.

Int J Cancer 2022 Mar 17;150(6):1029-1044. Epub 2021 Nov 17.

College of Computer Science, Sichuan University, Chengdu, China.

Multiple types of genomic variations are present in cutaneous melanoma, and some of the genomic features may have an impact on the prognosis of the disease. Access to genomics data via public repositories such as The Cancer Genome Atlas (TCGA) allows for a better understanding of melanoma at the molecular level, therefore making characterization of substantial heterogeneity in melanoma patients possible. Here, we propose an approach that integrates genomics data, a disease network, and a deep learning model to classify melanoma patients for prognosis, assess the impact of genomic features on the classification, and provide interpretation of the impactful features. We integrated genomics data into a melanoma network and applied an autoencoder model to identify subgroups in TCGA melanoma patients. The model utilizes communities identified in the network to effectively reduce the dimensionality of genomics data into a patient score profile. Based on the score profile, we identified three patient subtypes that show different survival times. Furthermore, we quantified and ranked the impact of genomic features on the patient score profile using a machine-learning technique. Follow-up analysis of the top-ranking features provided biological interpretation at both the pathway and molecular levels, such as their mutation and interactome profiles in melanoma and their involvement in pathways associated with signal transduction, the immune system, and the cell cycle. Taken together, we demonstrate the ability of the approach to identify disease subgroups using a deep learning model that captures the most relevant information of genomics data in the melanoma network.
DOI: http://dx.doi.org/10.1002/ijc.33860
March 2022

Rigid and Non-rigid Motion Compensation in Weight-bearing CBCT of the Knee using Simulated Inertial Measurements.

IEEE Trans Biomed Eng 2021 Oct 29;PP. Epub 2021 Oct 29.

Objective: Involuntary subject motion is the main source of artifacts in weight-bearing cone-beam CT of the knee. To achieve image quality for clinical diagnosis, the motion needs to be compensated. We propose to use inertial measurement units (IMUs) attached to the leg for motion estimation.

Methods: We perform a simulation study using real motion recorded with an optical tracking system. Three IMU-based correction approaches are evaluated, namely rigid motion correction, non-rigid 2D projection deformation and non-rigid 3D dynamic reconstruction. We present an initialization process based on the system geometry. With an IMU noise simulation, we investigate the applicability of the proposed methods in real applications.

Results: All proposed IMU-based approaches correct motion at least as well as a state-of-the-art marker-based approach. The structural similarity index and the root mean squared error between motion-free and motion-corrected volumes are improved by 24-35% and 78-85%, respectively, compared with the uncorrected case. The noise analysis shows that the noise levels of commercially available IMUs need to be improved by a factor of 10, which is currently only achieved by specialized hardware not robust enough for the application.

Conclusion: Our simulation study confirms the feasibility of this novel approach and defines improvements necessary for a real application.

Significance: The presented work lays the foundation for IMU-based motion compensation in cone-beam CT of the knee and creates valuable insights for future developments.
DOI: http://dx.doi.org/10.1109/TBME.2021.3123673
October 2021

Predicting Anxiety in Routine Palliative Care Using Bayesian-Inspired Association Rule Mining.

Front Digit Health 2021 Aug 25;3:724049. Epub 2021 Aug 25.

Department of Palliative Medicine, Comprehensive Cancer Center Erlangen-EMN, Friedrich-Alexander-University, Erlangen-Nürnberg, Germany.

We propose a novel knowledge extraction method based on Bayesian-inspired association rule mining to classify anxiety in heterogeneous, routinely collected data from 9,924 palliative patients. The method extracts association rules mined using lift and local support as selection criteria. The extracted rules are used to assess the maximum evidence supporting and rejecting anxiety for each patient in the test set. We evaluated the predictive accuracy by calculating the area under the receiver operating characteristic curve (AUC). The evaluation produced an AUC of 0.89 and a set of 55 atomic rules with one item in the premise and the conclusion, respectively. The selected rules include variables like pain, nausea, and various medications. Our method outperforms the previous state of the art (AUC = 0.72). We analyzed the relevance and novelty of the mined rules. Palliative experts were asked about the correlation between variables in the data set and anxiety. By comparing expert answers with the retrieved rules, we grouped rules into expected and unexpected ones and found several rules for which experts' opinions and the data-backed rules differ, most notably with the patients' sex. The proposed method offers a novel way to predict anxiety in palliative settings using routinely collected data with an explainable and effective model based on Bayesian-inspired association rule mining. The extracted rules give further insight into potential knowledge gaps in the palliative care field.
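A sketch of the selection criteria named above, lift and support of a rule "X implies anxiety", computed from transaction counts. The "local" in local support refers to restricting the reference group; here this is expressed through a configurable reference count. All counts are invented.

```python
# Lift and (local) support of an association rule from counts.
def lift_and_local_support(n_ref, n_x, n_target, n_x_and_target):
    """n_ref: size of the reference group (whole set or a subgroup)."""
    support = n_x_and_target / n_ref                 # rule frequency
    confidence = n_x_and_target / n_x
    lift = confidence / (n_target / n_ref)           # >1: positive association
    return lift, support

lift, support = lift_and_local_support(
    n_ref=9924, n_x=800, n_target=1500, n_x_and_target=240)
print(f"lift={lift:.2f}, support={support:.4f}")     # lift ~1.98
```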
DOI: http://dx.doi.org/10.3389/fdgth.2021.724049
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8521932
August 2021

Comparison of methods for sensitivity correction in Talbot-Lau computed tomography.

Int J Comput Assist Radiol Surg 2021 Dec 9;16(12):2099-2106. Epub 2021 Sep 9.

Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany.

Purpose: In Talbot-Lau X-ray phase contrast imaging, the measured phase value depends on the position of the object in the measurement setup. When imaging large objects, this may lead to inhomogeneous phase contributions within the object. These inhomogeneities introduce artifacts in tomographic reconstructions of the object.

Methods: In this work, we compare recently proposed approaches to correct such reconstruction artifacts. We compare an iterative reconstruction algorithm, a known operator network and a U-net. The methods are qualitatively and quantitatively compared on the Shepp-Logan phantom and on the anatomy of a human abdomen. We also perform a dedicated experiment on the noise behavior of the methods.

Results: All methods were able to reduce the specific artifacts in the reconstructions for both the simulated and the virtual human anatomy data. The results show method-specific residual errors that are indicative of the inherently different correction approaches. While all methods were able to correct the artifacts, we report a different noise behavior.

Conclusion: The iterative reconstruction performs very well, but at the cost of a high runtime. The known operator network consistently shows very competitive performance. The U-net performs slightly worse, but has the benefit that it is a general-purpose network that does not require special application knowledge.
DOI: http://dx.doi.org/10.1007/s11548-021-02487-x
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8616885
December 2021

Enhanced Magnetic Resonance Image Synthesis with Contrast-Aware Generative Adversarial Networks.

J Imaging 2021 Aug 4;7(8). Epub 2021 Aug 4.

Department of Industrial Engineering and Health, Technical University of Applied Sciences Amberg-Weiden, 92637 Weiden, Germany.

A magnetic resonance imaging (MRI) exam typically consists of the acquisition of multiple MR pulse sequences, which are required for a reliable diagnosis. With the rise of generative deep learning models, approaches for the synthesis of MR images have been developed to either synthesize additional MR contrasts, generate synthetic data, or augment existing data for AI training. While current generative approaches allow only the synthesis of specific sets of MR contrasts, we developed a method to generate synthetic MR images with adjustable image contrast. To this end, we trained a generative adversarial network (GAN) with a separate auxiliary classifier (AC) network to generate synthetic MR knee images conditioned on various acquisition parameters (repetition time, echo time, and image orientation). The AC determined the repetition time with a mean absolute error (MAE) of 239.6 ms, the echo time with an MAE of 1.6 ms, and the image orientation with an accuracy of 100%, and can therefore properly condition the generator network during training. Moreover, in a visual Turing test, two experts mislabeled 40.5% of real and synthetic MR images, demonstrating that the image quality of the generated synthetic MR images is comparable to that of real MR images. This work can support radiologists and technologists during the parameterization of MR sequences by previewing the yielded MR contrast, can serve as a valuable tool for radiology training, and can be used for customized data generation to support AI training.
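A hedged sketch of the conditioning mechanism: noise and normalized acquisition parameters (TR, TE, orientation) are concatenated as generator input. Layer sizes and the tiny output resolution are invented for brevity; the paper's architecture is not reproduced here.

```python
# Conditional generator skeleton: acquisition parameters steer the output.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, z_dim=64, n_params=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_params, 256), nn.ReLU(),
            nn.Linear(256, 32 * 32), nn.Tanh())     # tiny image for brevity

    def forward(self, z, params):
        x = torch.cat([z, params], dim=1)           # inject TR/TE/orientation
        return self.net(x).view(-1, 1, 32, 32)

g = CondGenerator()
z = torch.randn(2, 64)
params = torch.tensor([[0.3, 0.1, 0.0], [0.8, 0.5, 1.0]])  # normalized values
fake = g(z, params)
```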
DOI: http://dx.doi.org/10.3390/jimaging7080133
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8404922
August 2021

Fully Automated 3D Cardiac MRI Localisation and Segmentation Using Deep Neural Networks.

J Imaging 2020 Jul 6;6(7). Epub 2020 Jul 6.

Pattern Recognition Lab, Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany.

Cardiac magnetic resonance (CMR) imaging is used widely for morphological assessment and diagnosis of various cardiovascular diseases. Deep learning approaches based on 3D fully convolutional networks (FCNs) have improved state-of-the-art segmentation performance in CMR images. However, previous methods have employed several pre-processing steps and have focused primarily on segmenting low-resolution images. A crucial step in any automatic segmentation approach is to first localize the cardiac structure of interest within the MRI volume, to reduce false positives and computational complexity. In this paper, we propose two strategies for localizing and segmenting the heart ventricles and myocardium, termed multi-stage and end-to-end, using a 3D convolutional neural network. Our method consists of an encoder-decoder network that is first trained to predict a coarse localized density map of the target structure at a low resolution. Subsequently, a second similar network employs this coarse density map to crop the image at a higher resolution, and consequently, segment the target structure. For the latter, the same two-stage architecture is trained end-to-end. The 3D U-Net with some architectural changes (referred to as 3D DR-UNet) was used as the base architecture in this framework for both the multi-stage and end-to-end strategies. Moreover, we investigate whether the incorporation of coarse features improves the segmentation. We evaluate the two proposed segmentation strategies on two cardiac MRI datasets, namely, the Automated Cardiac Diagnosis Challenge (ACDC) STACOM 2017 and the Left Atrium Segmentation Challenge (LASC) STACOM 2018. Extensive experiments and comparisons with other state-of-the-art methods indicate that the proposed multi-stage framework consistently outperforms the rest in terms of several segmentation metrics. The experimental results highlight the robustness of the proposed approach and its ability to generate accurate high-resolution segmentations, despite the presence of varying degrees of pathology-induced changes to cardiac morphology and image appearance, low contrast, and noise in the CMR volumes.
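A minimal sketch of the multi-stage idea: locate the center of mass of a coarse density map and crop a fixed-size region of interest from the high-resolution volume for the second-stage segmentation. All sizes are assumptions, not the paper's configuration.

```python
# Crop a region of interest around the predicted coarse density map.
import numpy as np
from scipy.ndimage import center_of_mass

def crop_around_density(volume, density_map, crop=(64, 64, 64)):
    center = np.round(center_of_mass(density_map)).astype(int)
    starts = [int(np.clip(c - s // 2, 0, dim - s))
              for c, s, dim in zip(center, crop, volume.shape)]
    slices = tuple(slice(st, st + s) for st, s in zip(starts, crop))
    return volume[slices]

volume = np.random.rand(128, 128, 128)      # high-resolution CMR volume
density = np.zeros((128, 128, 128))
density[40:60, 50:70, 60:80] = 1.0          # stand-in predicted heart map
roi = crop_around_density(volume, density)  # (64, 64, 64) crop
```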
DOI: http://dx.doi.org/10.3390/jimaging6070065
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8321054
July 2020

Validation of a classification and scoring system for the diagnosis of laryngeal and pharyngeal squamous cell carcinomas by confocal laser endomicroscopy.

Braz J Otorhinolaryngol 2021 Jul 20. Epub 2021 Jul 20.

University Hospital Aachen, RWTH Aachen University, Department of Otorhinolaryngology, Head and Neck Surgery, Germany.

Introduction: Confocal laser endomicroscopy is an optical imaging technique that allows in vivo, real-time, microscope-like images of the upper aerodigestive tract's mucosa. The assessment of morphological tissue characteristics for the correct differentiation between healthy and malignant suspected mucosa requires strict evaluation criteria.

Objective: This study aims to validate an eight-point score for the correct assessment of malignancy.

Methods: We performed confocal laser endomicroscopy between March and October 2020 in 13 patients. 197 sequences (11,820 images) originated from the marginal area of pharyngeal and laryngeal carcinomas. Specimens were taken at corresponding locations and analyzed with H&E staining as a standard of reference. A total of six examiners evaluated the sequences based on a scoring system; they were blinded to the histopathological examination. The primary endpoints were sensitivity, specificity, and accuracy. Secondary endpoints were interrater reliability and receiver operating characteristics.

Results: Healthy mucosa showed epithelium with uniform size and shape with distinct cytoplasmic membranes and regular vessel architecture. Confocal laser endomicroscopy of malignant cells demonstrated a disorganized arrangement of variable cellular morphology. We calculated an accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 83.2%, 81.3%, 85.5%, 86.7%, and 79.7%, respectively, with a κ-value of 0.64, and an area under the curve of 0.86.
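The reported endpoints follow directly from the confusion-matrix counts of the blinded ratings; a sketch with invented counts:

```python
# Diagnostic metrics from raw true/false positive and negative counts.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

print(diagnostic_metrics(tp=480, fp=74, tn=436, fn=110))
```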

Conclusion: The results confirm that this scoring system is applicable in the laryngeal and pharyngeal mucosa to classify benign and malignant tissue. A scoring system based on defined and reproducible characteristics can help translate this experimental method to broad clinical practice in head and neck diagnosis.
DOI: http://dx.doi.org/10.1016/j.bjorl.2021.06.002
July 2021

The Potential of OMICs Technologies for the Treatment of Immune-Mediated Inflammatory Diseases.

Int J Mol Sci 2021 Jul 13;22(14). Epub 2021 Jul 13.

Department of Internal Medicine 3-Rheumatology and Immunology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Universitätsklinikum, 91054 Erlangen, Germany.

Immune-mediated inflammatory diseases (IMIDs), such as inflammatory bowel diseases and inflammatory arthritis (e.g., rheumatoid arthritis, psoriatic arthritis), are marked by increasing worldwide incidence rates. Apart from irreversible damage of the affected tissue, the systemic nature of these diseases heightens the incidence of cardiovascular insults and colitis-associated neoplasia. Only 40-60% of patients respond to currently used standard-of-care immunotherapies. In addition to this limited long-term effectiveness, all current therapies have to be given on a lifelong basis, as they are unable to specifically reprogram the inflammatory process and thus achieve a true cure of the disease. On the other hand, the development of various OMICs technologies is considered "the great hope" for improving the treatment of IMIDs. This review sheds light on the progressive development and the numerous approaches from basic science that gradually lead to the transfer from "bench to bedside" and the implementation into general patient care procedures.
DOI: http://dx.doi.org/10.3390/ijms22147506
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8306614
July 2021

DEDDIAG, a domestic electricity demand dataset of individual appliances in Germany.

Sci Data 2021 Jul 15;8(1):176. Epub 2021 Jul 15.

Rosenheim Technical University of Applied Sciences, Dept. of Computer Science, 83024, Rosenheim, Germany.

Real-world domestic electricity demand datasets are the key enabler for developing and evaluating machine learning algorithms that facilitate the analysis of demand attribution and usage behavior. Breaking down the electricity demand of domestic households is seen as the key technology for intelligent smart-grid management systems that seek an equilibrium of electricity supply and demand. For the purpose of comparable research, we publish DEDDIAG, a domestic electricity demand dataset of individual appliances in Germany. The dataset contains recordings of 15 homes over a period of up to 3.5 years, wherein a total of 50 appliances have been recorded at a frequency of 1 Hz. The recorded appliances are of significance for load-shifting purposes, such as dishwashers, washing machines, and refrigerators. One home also includes three-phase mains readings that can be used for disaggregation tasks. Additionally, DEDDIAG contains manual ground-truth event annotations for 14 appliances that provide precise start and stop timestamps. Such annotations have not been published for any long-term electricity dataset we are aware of.
DOI: http://dx.doi.org/10.1038/s41597-021-00963-2
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8282868
July 2021

Automating Time-Consuming and Error-Prone Manual Nursing Management Documentation Processes.

Comput Inform Nurs 2021 Jul 12;39(10):584-591. Epub 2021 Jul 12.

Author Affiliations: Department of Industrial Engineering and Health, Technical University Amberg-Weiden (Mr Haas and Dr Rothgang); and Klinikum Weiden, Kliniken Nordoberpfalz AG (Ms Hutzler and Dr Egginger), Weiden; and Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg (Mr Haas and Dr Maier), Germany.

A German regulation requires nursing managers to document patient-nurse ratios. They have to combine heterogeneous hospital data from different sources. Missing documentation or ratios that are too high lead to sanctions. Automated approaches are needed to accelerate the time-consuming and error-prone documentation process. A documentation and visualization system was implemented. The system allows nursing managers to quickly and automatically create the documentation required by the regulation. Interactive visualization dashboards assist with the analysis of patient and staff numbers. The developed method was effectively used in nursing management tasks. No changes to the information technology infrastructure were needed. The new process is around 35 hours per month faster and less error-prone. The documentation functionality automatically reads the required information and correctly calculates the documentation. The visualization functionality allows nursing managers to assess the current patient-nurse ratios before the documentation is submitted. The method scales to multiple wards and locations. It calculates the sanctions to expect and is easily updatable. The proposed method is expected to decrease nursing administration workloads and facilitate the analysis of nursing management data in a cost-effective way.
DOI: http://dx.doi.org/10.1097/CIN.0000000000000790
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8505160
July 2021

Impact of intraepithelial capillary loops and atypical vessels in confocal laser endomicroscopy for the diagnosis of laryngeal and hypopharyngeal squamous cell carcinoma.

Eur Arch Otorhinolaryngol 2021 Jun 29. Epub 2021 Jun 29.

Department of Otorhinolaryngology, Head and Neck Surgery, Friedrich-Alexander-Universität Erlangen-Nürnberg, University Hospital, Waldstrasse 1, 91054, Erlangen, Germany.

Purpose: Confocal laser endomicroscopy (CLE) allows surface imaging of the laryngeal and pharyngeal mucosa in vivo at a thousand-fold magnification. This study aims to compare irregular blood vessels and intraepithelial capillary loops in healthy mucosa and squamous cell carcinoma (SCC) via CLE.

Materials And Methods: We included ten patients with confirmed SCC and planned total laryngectomy in this study between March 2020 and February 2021. CLE images of these patients were collected and compared with the corresponding histology in hematoxylin and eosin staining. We analyzed the characteristic endomicroscopic patterns of blood vessels and intraepithelial capillary loops for the diagnosis of SCC.

Results: In a total of 54 sequences, we identified 243 blood vessels, which were analyzed regarding structure, diameter, and fluorescein leakage, confirming that irregular, corkscrew-like vessels (24.4% vs. 1.3%; P < .001), dilated intraepithelial capillary loops (90.8% vs. 28.7%; P < .001), and increased capillary leakage (40.7% vs. 2.5%; P < .001) are detected significantly more frequently in SCC compared to healthy epithelium. We defined a vessel diameter of 30 μm in capillary loops as a cut-off value, obtaining a sensitivity, specificity, PPV, NPV, and accuracy of 90.6%, 71.3%, 57.4%, 94.7%, and 77.1%, respectively, for the detection of malignancy based solely on capillary architecture.

Conclusion: Capillaries within malignant lesions are fundamentally different from those in healthy mucosa regions. The capillary architecture is a significant feature aiding the identification of malignant mucosa areas during in-vivo, real-time CLE examination.
DOI: http://dx.doi.org/10.1007/s00405-021-06954-8
June 2021

MR-contrast-aware image-to-image translations with generative adversarial networks.

Int J Comput Assist Radiol Surg 2021 Dec 20;16(12):2069-2078. Epub 2021 Jun 20.

Department of Industrial Engineering and Health, Technical University of Applied Sciences Amberg-Weiden, Weiden, Germany.

Purpose: A magnetic resonance imaging (MRI) exam typically consists of several sequences that yield different image contrasts. Each sequence is parameterized through multiple acquisition parameters that influence image contrast, signal-to-noise ratio, acquisition time, and/or resolution. Depending on the clinical indication, different contrasts are required by the radiologist to make a diagnosis. As MR sequence acquisition is time consuming and acquired images may be corrupted due to motion, a method to synthesize MR images with adjustable contrast properties is required.

Methods: Therefore, we trained an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time. Our approach is motivated by style transfer networks; in our case, the "style" of an image is explicitly given, as it is determined by the MR acquisition parameters our network is conditioned on.

Results: This enables us to synthesize MR images with adjustable image contrast. We evaluated our approach on the fastMRI dataset, a large set of publicly available MR knee images, and show that our method outperforms a benchmark pix2pix approach in the translation of non-fat-saturated MR images to fat-saturated images. Our approach yields a peak signal-to-noise ratio and structural similarity of 24.48 and 0.66, surpassing the pix2pix benchmark model significantly.

Conclusion: Our model is the first that enables fine-tuned contrast synthesis, which can be used to synthesize missing MR-contrasts or as a data augmentation technique for AI training in MRI. It can also be used as basis for other image-to-image translation tasks within medical imaging, e.g., to enhance intermodality translation (MRI → CT) or 7 T image synthesis from 3 T MR images.
DOI: http://dx.doi.org/10.1007/s11548-021-02433-x
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8616894
December 2021

3D Non-Rigid Alignment of Low-Dose Scans Allows to Correct for Saturation in Lower Extremity Cone-Beam CT.

IEEE Access 2021 May 11;9:71821-71831. Epub 2021 May 11.

Division of Mechanical and Biomedical Engineering, Graduate Program in System Health Science and Engineering, Ewha Womans University, Seoul 03760, South Korea.

Detector saturation in cone-beam computed tomography occurs when an object of highly varying shape and material composition is imaged using an automatic exposure control (AEC) system. When imaging a subject's knees, high beam energy ensures the visibility of internal structures but leads to overexposure in less dense border regions. In this work, we propose to use an additional low-dose scan to correct the saturation artifacts of AEC scans. Overexposed pixels are identified in the projection images of the AEC scan using histogram-based thresholding. The saturation-free pixels from the AEC scan are combined with the skin border pixels of the low-dose scan prior to volumetric reconstruction. To compensate for patient motion between the two scans, a 3D non-rigid alignment of the projection images in a backward-forward-projection process based on fiducial marker positions is proposed. On numerical simulations, the projection combination improved the structural similarity index measure from 0.883 to 0.999. Further evaluations were performed on two subject knee acquisitions, one without and one with motion between the AEC and low-dose scans. Saturation-free reference images were acquired using a beam attenuator. The proposed method could qualitatively restore the information of peripheral tissue structures. Applying the 3D non-rigid alignment made it possible to use the projection images with inter-scan subject motion for projection image combination. The increase in radiation exposure due to the additional low-dose scan was found to be negligibly low. The presented methods allow simple but effective correction of saturation artifacts.
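A sketch of the projection combination described above: overexposed pixels are detected in the AEC projection and replaced with the corresponding low-dose pixels. The fixed saturation threshold is a simplification; the paper derives thresholds from the image histogram.

```python
# Combine AEC and low-dose projections at overexposed pixels.
import numpy as np

def combine_projections(aec, low_dose, saturation_level=0.98):
    overexposed = aec >= saturation_level * aec.max()
    combined = aec.copy()
    combined[overexposed] = low_dose[overexposed]   # skin-border information
    return combined

aec = np.random.rand(620, 480)
aec[:, :40] = 1.0                                   # saturated border region
low_dose = 0.5 * np.random.rand(620, 480)
proj = combine_projections(aec, low_dose)
```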
DOI: http://dx.doi.org/10.1109/access.2021.3079368
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8208599
May 2021

Anaerobic Sulfur Oxidation Underlies Adaptation of a Chemosynthetic Symbiont to Oxic-Anoxic Interfaces.

mSystems 2021 Jun 26;6(3):e0118620. Epub 2021 May 26.

University of Vienna, Department of Functional and Evolutionary Ecology, Environmental Cell Biology Group, Vienna, Austria.

Chemosynthetic symbioses occur worldwide in marine habitats, but comprehensive physiological studies of chemoautotrophic bacteria thriving on animals are scarce. Stilbonematinae are coated by thiotrophic ectosymbionts. As these nematodes migrate through the redox zone, their ectosymbionts experience varying oxygen concentrations. However, nothing is known about how these variations affect their physiology. Here, by applying omics, Raman microspectroscopy, and stable isotope labeling, we investigated the effect of oxygen on "Candidatus Thiosymbion oneisti". Unexpectedly, sulfur oxidation genes were upregulated in anoxic relative to oxic conditions, but carbon fixation genes and incorporation of ¹³C-labeled bicarbonate were not. Instead, several genes involved in carbon fixation were upregulated under oxic conditions, together with genes involved in organic carbon assimilation, polyhydroxyalkanoate (PHA) biosynthesis, nitrogen fixation, and urea utilization. Furthermore, in the presence of oxygen, stress-related genes were upregulated together with vitamin biosynthesis genes likely necessary to withstand oxidative stress, and the symbiont appeared to proliferate less. Based on its physiological response to oxygen, we propose that "Ca. T. oneisti" may exploit anaerobic sulfur oxidation coupled to denitrification to proliferate in anoxic sand. However, the ectosymbiont would still profit from the oxygen available in superficial sand, as the energy-efficient aerobic respiration would facilitate carbon and nitrogen assimilation. Chemoautotrophic endosymbionts are famous for exploiting sulfur oxidation to feed marine organisms with fixed carbon. However, the physiology of thiotrophic bacteria thriving on the surface of animals (ectosymbionts) is less understood. One longstanding hypothesis posits that attachment to animals that migrate between reduced and oxic environments would boost sulfur oxidation, as the ectosymbionts would alternately access sulfide and oxygen, the most favorable electron acceptor. Here, we investigated the effect of oxygen on the physiology of "Candidatus Thiosymbion oneisti", a gammaproteobacterium which lives attached to marine nematodes inhabiting shallow-water sand. Surprisingly, sulfur oxidation genes were upregulated under anoxic relative to oxic conditions. Furthermore, under anoxia, the ectosymbiont appeared to be less stressed and to proliferate more. We propose that animal-mediated access to oxygen, rather than enhancing sulfur oxidation, would facilitate assimilation of carbon and nitrogen by the ectosymbiont.
DOI: http://dx.doi.org/10.1128/mSystems.01186-20
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8269255
June 2021

Robust classification from noisy labels: Integrating additional knowledge for chest radiography abnormality assessment.

Med Image Anal 2021 Aug;72:102087. Epub 2021 Apr 24.

Digital Technology and Innovation, Siemens Healthineers, Princeton, NJ 08540, USA.

Chest radiography is the most common radiographic examination performed in daily clinical practice for the detection of various heart and lung abnormalities. The large amount of data to be read and reported, with more than 100 studies per day for a single radiologist, poses a challenge in consistently maintaining high interpretation accuracy. The introduction of large-scale public datasets has led to a series of novel systems for automated abnormality classification. However, the labels of these datasets were obtained using natural language processing of medical reports, yielding a large degree of label noise that can impact performance. In this study, we propose novel training strategies that handle label noise from such suboptimal data. Prior label probabilities were measured on a subset of training data re-read by 4 board-certified radiologists and were used during training to increase the robustness of the training model to the label noise. Furthermore, we exploit the high comorbidity of abnormalities observed in chest radiography and incorporate this information to further reduce the impact of label noise. Additionally, anatomical knowledge is incorporated by training the system to predict lung and heart segmentation, as well as spatial knowledge labels. To deal with multiple datasets and images derived from various scanners that apply different post-processing techniques, we introduce a novel image normalization strategy. Experiments were performed on an extensive collection of 297,541 chest radiographs from 86,876 patients, leading to a state-of-the-art performance level for 17 abnormalities from 2 datasets. With an average AUC score of 0.880 across all abnormalities, our proposed training strategies can be used to significantly improve performance scores.
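A hedged sketch of one way to use prior label probabilities against noisy labels: soften the binary NLP-derived targets toward a per-class prior before computing the loss. The blending rule, weight, and prior values are assumptions for illustration, not the paper's exact formulation.

```python
# Soft targets that blend noisy 0/1 labels with per-class label priors.
import torch
import torch.nn.functional as F

def soft_targets(noisy_labels, label_prior, weight=0.3):
    """Interpolate between the mined label and the prior probability."""
    return (1 - weight) * noisy_labels + weight * label_prior

logits = torch.randn(4, 17)                    # 17 abnormality scores
noisy = torch.randint(0, 2, (4, 17)).float()   # NLP-mined labels
prior = torch.full((17,), 0.7)                 # per-abnormality label prior
loss = F.binary_cross_entropy_with_logits(logits, soft_targets(noisy, prior))
```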
DOI: http://dx.doi.org/10.1016/j.media.2021.102087
August 2021

Optimal Beam Loading in a Laser-Plasma Accelerator.

Phys Rev Lett 2021 Apr;126(17):174801

Center for Free-Electron Laser Science and Department of Physics, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany.

Applications of laser-plasma accelerators demand low energy spread beams and high-efficiency operation. Achieving both requires flattening the accelerating fields by controlled beam loading of the plasma wave. Here, we optimize the generation of an electron bunch via localized ionization injection, such that the combination of injected current profile and averaged acceleration dynamics results in optimal beam loading conditions. This enables the reproducible production of 1.2% rms energy spread bunches with 282 MeV and 44 pC at an estimated energy-transfer efficiency of ∼19%. We correlate shot-to-shot variations to reveal the phase space dynamics and train a neural network that predicts the beam quality as a function of the drive laser.
DOI: http://dx.doi.org/10.1103/PhysRevLett.126.174801
April 2021

Deep learning methods allow fully automated segmentation of metacarpal bones to quantify volumetric bone mineral density.

Sci Rep 2021 May 6;11(1):9697. Epub 2021 May 6.

Pattern Recognition Lab-Computer Science, Friedrich-Alexander Universität Erlangen-Nürnberg (FAU), Erlangen, Germany.

Arthritis patients develop hand bone loss, which leads to destruction and functional impairment of the affected joints. High-resolution peripheral quantitative computed tomography (HR-pQCT) allows the quantification of volumetric bone mineral density (vBMD) and bone microstructure in vivo with an isotropic voxel size of 82 micrometres. However, image processing to obtain bone characteristics is a time-consuming process, as it requires semi-automatic segmentation of the bone. In this work, a fully automatic vBMD measurement pipeline for the metacarpal (MC) bone using deep learning methods is introduced. Based on a dataset of HR-pQCT volumes with MC measurements for 541 patients with arthritis, a segmentation network is trained. The best network achieves an intersection over union as high as 0.94 and a Dice similarity coefficient of 0.97 while taking only 33 s to process a whole patient, yielding a speedup between 2.5 and 4.0 for the whole workflow. Strong correlations between the vBMD measurements of the expert and those of the automatic pipeline are achieved for the average bone density, with 0.999 (Pearson) and 0.996 (Spearman's rank) with [Formula: see text] for all correlations. A qualitative assessment of the network predictions and the manual annotations yields a 65.9% probability that the expert favors the network predictions. Further, the steps to integrate the pipeline into the clinical workflow are shown. In order to make these workflow improvements available to others, we openly share the code of this work.
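The two overlap metrics quoted above are standard for binary segmentation masks; a sketch with toy masks:

```python
# Dice similarity coefficient and intersection over union (IoU).
import numpy as np

def dice_and_iou(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return dice, iou

pred = np.zeros((100, 100)); pred[20:60, 20:60] = 1
truth = np.zeros((100, 100)); truth[25:65, 25:65] = 1
print(dice_and_iou(pred, truth))
```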
DOI: http://dx.doi.org/10.1038/s41598-021-89111-9
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8102473
May 2021

"Keep it simple, scholar": an experimental analysis of few-parameter segmentation networks for retinal vessels in fundus imaging.

Int J Comput Assist Radiol Surg 2021 Jun 30;16(6):967-978. Epub 2021 Apr 30.

Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany.

Purpose: With the recent development of deep learning technologies, various neural networks have been proposed for fundus retinal vessel segmentation. Among them, the U-Net is regarded as one of the most successful architectures. In this work, we start with simplification of the U-Net, and explore the performance of few-parameter networks on this task.

Methods: We first modify the model with popular functional blocks and additional resolution levels, then we switch to exploring the limits for compression of the network architecture. Experiments are designed to simplify the network structure, decrease the number of trainable parameters, and reduce the amount of training data. Performance evaluation is carried out on four public databases, namely DRIVE, STARE, HRF and CHASE_DB1. In addition, the generalization ability of the few-parameter networks is compared against that of a state-of-the-art segmentation network.

Results: We demonstrate that the additive variants do not significantly improve the segmentation performance. The performance of the models is not severely harmed unless they are harshly degenerated: reduced to one level, or to one filter in the input convolutional layer, or trained with only one image. We also demonstrate that few-parameter networks have strong generalization ability.

Conclusion: It is counter-intuitive that the U-Net produces reasonably good segmentation predictions until reaching the mentioned limits. Our work has two main contributions. On the one hand, the importance of different elements of the U-Net is evaluated, and the minimal U-Net which is capable of the task is presented. On the other hand, our work demonstrates that retinal vessel segmentation can be tackled by surprisingly simple configurations of the U-Net, reaching almost state-of-the-art performance. We also show that the simple configurations have better generalization ability than state-of-the-art models with high model complexity. These observations seem to contradict the current trend of continued increase in model complexity and capacity for the task under consideration.
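A hedged sketch of a "harshly degenerated" U-Net in the spirit of the study: a single resolution level with very few filters. This illustrates the compression idea; it is not one of the exact configurations evaluated.

```python
# Minimal one-level U-Net for retinal vessel segmentation patches.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, filters=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, filters, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(filters, filters, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = nn.Sequential(
            nn.Conv2d(2 * filters, filters, 3, padding=1), nn.ReLU(),
            nn.Conv2d(filters, 1, 1))                   # vessel logit map

    def forward(self, x):
        e = self.enc(x)
        m = self.up(self.mid(self.down(e)))
        return self.dec(torch.cat([e, m], dim=1))       # skip connection

net = TinyUNet()
print(sum(p.numel() for p in net.parameters()), "trainable parameters")
out = net(torch.randn(1, 3, 64, 64))                    # fundus patch
```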
DOI: http://dx.doi.org/10.1007/s11548-021-02340-1
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8166700
June 2021

Radon Adsorption in Charcoal.

Int J Environ Res Public Health 2021 Apr 22;18(9). Epub 2021 Apr 22.

Biophysics Department, GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291 Darmstadt, Germany.

Radon is pervasive in our environment and is the second leading cause of lung cancer after smoking. Therefore, the measurement of radon activity concentrations in homes is important. The use of charcoal is an easy and cost-efficient method for this purpose, as radon can bind to charcoal via van der Waals interactions. Admittedly, there are potential influencing factors during exposure that can distort the results and need to be investigated. Consequently, charcoal was exposed in a radon chamber under different parameters. Afterward, the activity of the radon decay products ²¹⁴Pb and ²¹⁴Bi was measured and extrapolated to the initial radon activity in the sample. After an exposure of 1 h, around 94% of the maximum value was attained and used as a limit for the subsequent exposure time. Charcoal was exposed at differing humidity ranging from 5 to 94%, but no influence on radon adsorption could be detected. If the samples were not sealed after exposure, radon desorbed with an effective half-life of around 31 h. There is also a strong dependence of radon uptake on the chemical structure of the recipient material, which is interesting for biological materials or diffusion barriers as this determines accumulation and transport.
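A sketch of how an effective half-life like the one above can be obtained: fit an exponential decay to activity measurements and convert the rate constant. The data points are invented to roughly match a 31 h half-life.

```python
# Fit A(t) = A0 * exp(-lambda * t) and report the effective half-life.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a0, lam):
    return a0 * np.exp(-lam * t)

t = np.array([0, 10, 20, 40, 60])                  # hours after exposure
activity = np.array([100, 80, 64, 41, 26])         # arbitrary units
(a0, lam), _ = curve_fit(decay, t, activity, p0=(100, 0.02))
print(f"effective half-life: {np.log(2) / lam:.1f} h")
```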
DOI: http://dx.doi.org/10.3390/ijerph18094454
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8122700
April 2021

X-Ray Scatter Estimation Using Deep Splines.

IEEE Trans Med Imaging 2021 Sep;40(9):2272-2283. Epub 2021 Aug 31.

X-ray scatter compensation is a very desirable technique in flat-panel X-ray imaging and cone-beam computed tomography. State-of-the-art U-Net-based scatter removal approaches have yielded promising results. However, as no physics constraints are applied to the output of the U-Net, it cannot be ruled out that it yields spurious results. Unfortunately, in the context of medical imaging, those may be misleading and could lead to wrong conclusions. To overcome this problem, we propose to embed B-splines as a known operator into neural networks. This inherently constrains their predictions to well-behaved and smooth functions. In a study using synthetic head and thorax data as well as real thorax phantom data, we found that our approach performed on par with the U-Net when comparing both algorithms based on quantitative performance metrics. However, our approach not only reduces runtime and parameter complexity, but we also found it much more robust to unseen noise levels. While the U-Net responded with visible artifacts, the proposed approach preserved the X-ray signal's frequency characteristics.
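A hedged sketch of the smoothness constraint idea: representing a 1D scatter estimate with a cubic B-spline yields an inherently smooth function, in contrast to free per-pixel predictions. This uses scipy's spline fitting for illustration; the network embedding itself is beyond this sketch.

```python
# Smooth a noisy 1D scatter profile with a cubic B-spline.
import numpy as np
from scipy.interpolate import BSpline, splrep

x = np.linspace(0, 1, 200)
noisy_scatter = 0.3 + 0.2 * np.sin(2 * np.pi * x) + 0.02 * np.random.randn(200)
tck = splrep(x, noisy_scatter, s=0.05)      # smoothing cubic spline fit
spline = BSpline(*tck)
smooth_estimate = spline(x)                 # well-behaved, smooth function
```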
DOI: http://dx.doi.org/10.1109/TMI.2021.3074712
September 2021

Quantifying the separability of data classes in neural networks.

Neural Netw 2021 Jul 5;139:278-293. Epub 2021 Apr 5.

Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg (FAU), Germany; Cognitive Neuroscience Center, University of Groningen, The Netherlands.

We introduce the Generalized Discrimination Value (GDV), which measures, in a non-invasive manner, how well different data classes separate in each given layer of an artificial neural network. It turns out that, at the end of the training period, the GDV in each given layer L attains a highly reproducible value, irrespective of the initialization of the network's connection weights. In the case of multi-layer perceptrons trained with error backpropagation, we find that classification of highly complex data sets requires a temporary reduction of class separability, marked by a characteristic 'energy barrier' in the initial part of the GDV(L) curve. Even more surprisingly, for a given data set, the GDV(L) runs through a fixed 'master curve', independently of the total number of network layers. Finally, due to its invariance with respect to dimensionality, the GDV may serve as a useful tool to compare the internal representational dynamics of artificial neural networks with different architectures for neural architecture search or network compression, or even with brain activity, in order to decide between different candidate models of brain function.
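A hedged sketch of a layer-wise separability measure in the spirit of the GDV: z-score each feature dimension, then compare mean intra-class to mean inter-class distances with a dimension-dependent scaling. This follows the idea described above; the exact published normalization may differ.

```python
# Simplified class-separability measure over one layer's activations.
import numpy as np
from itertools import combinations

def separability(features, labels):
    z = (features - features.mean(0)) / (features.std(0) + 1e-9)
    classes = np.unique(labels)
    def mean_dist(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    intra = np.mean([mean_dist(z[labels == c], z[labels == c]) for c in classes])
    inter = np.mean([mean_dist(z[labels == c1], z[labels == c2])
                     for c1, c2 in combinations(classes, 2)])
    return (intra - inter) / np.sqrt(z.shape[1])   # more negative: separable

feats = np.random.randn(200, 64)                   # activations of one layer
labels = np.random.randint(0, 4, 200)
print(separability(feats, labels))
```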
DOI: http://dx.doi.org/10.1016/j.neunet.2021.03.035
July 2021