Publications by authors named "Dennis Hendriksen"

10 Publications


CAPICE: a computational method for Consequence-Agnostic Pathogenicity Interpretation of Clinical Exome variations.

Genome Med 2020 08 24;12(1):75. Epub 2020 Aug 24.

Department of Genetics, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands.

Exome sequencing is now mainstream in clinical practice. However, identification of pathogenic Mendelian variants remains time-consuming, in part because the limited accuracy of current computational prediction methods requires manual classification by experts. Here we introduce CAPICE, a new machine-learning-based method for prioritizing pathogenic variants, including SNVs and short InDels. CAPICE outperforms the best general (CADD, GAVIN) and consequence-type-specific (REVEL, ClinPred) computational prediction methods for both rare and ultra-rare variants. CAPICE is easily added to diagnostic pipelines as a pre-computed score file, as command-line software, or via the online MOLGENIS web service with API. CAPICE is free and open source (LGPLv3) and available at https://github.com/molgenis/capice.
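As a rough illustration of the pre-computed score file route, the sketch below ranks variants by score. The column layout, values, and file contents are assumptions for illustration, not the actual CAPICE file format:

```python
# Hypothetical sketch: ranking variants by a pre-computed pathogenicity
# score, as one might do with a CAPICE-style score file. The columns
# (chrom, pos, ref, alt, score) are an assumption, not the real format.
import csv
import io

score_file = io.StringIO(
    "chrom\tpos\tref\talt\tscore\n"
    "1\t100\tA\tG\t0.92\n"
    "2\t200\tC\tT\t0.11\n"
    "7\t300\tG\tA\t0.75\n"
)

variants = list(csv.DictReader(score_file, delimiter="\t"))
# Prioritize: highest predicted pathogenicity first.
ranked = sorted(variants, key=lambda v: float(v["score"]), reverse=True)
top = ranked[0]
print(top["chrom"], top["pos"], top["score"])
```

In a real pipeline the lookup would be done against the downloadable score file rather than an in-memory table.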
http://dx.doi.org/10.1186/s13073-020-00775-w
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7446154
August 2020

MOLGENIS research: advanced bioinformatics data software for non-bioinformaticians.

Bioinformatics 2019 03;35(6):1076-1078

Genomics Coordination Center, University of Groningen and University Medical Center Groningen, Groningen, The Netherlands.

Motivation: The volume and complexity of biological data are increasing rapidly. Many clinical professionals and biomedical researchers without a bioinformatics background are generating big '-omics' data, but do not always have the tools to manage, process or publicly share these data.

Results: Here we present MOLGENIS Research, an open-source web application to collect, manage, analyze, visualize and share large and complex biomedical datasets without the need for advanced bioinformatics skills.

Availability And Implementation: MOLGENIS Research is freely available (open source software). It can be installed from source code (see http://github.com/molgenis), downloaded as a precompiled WAR file (for your own server), set up inside a Docker container (see http://molgenis.github.io), or requested as a Software-as-a-Service subscription. For a public demo instance and complete installation instructions, see http://molgenis.org/research.
http://dx.doi.org/10.1093/bioinformatics/bty742
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6419911
March 2019

BiobankUniverse: automatic matchmaking between datasets for biobank data discovery and integration.

Bioinformatics 2017 Nov;33(22):3627-3634

Department of Genetics, Genomics Coordination Center, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.

Motivation: Biobanks are indispensable for large-scale genetic/epidemiological studies, yet it remains difficult for researchers to determine which biobanks contain data matching their research questions.

Results: To overcome this, we developed a new matching algorithm that identifies pairs of related data elements between biobanks and research variables with high precision and recall. It integrates lexical comparison, Unified Medical Language System ontology tagging and semantic query expansion. The result is BiobankUniverse, a fast matchmaking service for biobanks and researchers. Biobankers upload their data elements and researchers their desired study variables; BiobankUniverse then automatically shortlists matching attributes between them. Users can quickly explore matching potential and search for biobanks/data elements matching their research. They can also curate matches and define personalized data-universes.
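As a toy illustration of how lexical comparison and ontology tagging can be combined into one match score (not the published algorithm; the equal weighting, labels, and UMLS concept IDs below are invented for illustration):

```python
# Illustrative sketch: score a candidate match between a research
# variable and a biobank data element by combining token overlap with
# shared ontology tags, in the spirit of BiobankUniverse's mix of
# lexical comparison and UMLS tagging. Weights are an assumption.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_score(var_label, var_tags, elem_label, elem_tags):
    lexical = jaccard(var_label.lower().split(), elem_label.lower().split())
    semantic = jaccard(var_tags, elem_tags)
    return 0.5 * lexical + 0.5 * semantic  # equal weights: an assumption

score = match_score("body height standing", {"C0005890"},
                    "standing height", {"C0005890"})
```

A matchmaking service would compute this score for every (variable, element) pair and shortlist the top-ranked elements.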

Availability And Implementation: BiobankUniverse is available at http://biobankuniverse.com or can be downloaded as part of the open source MOLGENIS suite at http://github.com/molgenis/molgenis.

Contact: m.a.swertz@rug.nl.

Supplementary Information: Supplementary data are available at Bioinformatics online.
http://dx.doi.org/10.1093/bioinformatics/btx478
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5870622
November 2017

MOLGENIS/connect: a system for semi-automatic integration of heterogeneous phenotype data with applications in biobanks.

Bioinformatics 2016 07 21;32(14):2176-83. Epub 2016 Mar 21.

Department of Genetics, University Medical Center Groningen, Genomics Coordination Center, University of Groningen, Groningen, The Netherlands; Department of Epidemiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.

Motivation: While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration.

Results: To address this challenge, we developed MOLGENIS/connect, a semi-automatic system to find, match and pool data from different sources. The system shortlists relevant source attributes from thousands of candidates using ontology-based query expansion to overcome variations in terminology. It then generates algorithms that transform source attributes to a common target DataSchema. These include unit conversion, categorical value matching and complex conversion patterns (e.g. calculation of BMI). In comparison to human experts, MOLGENIS/connect was able to auto-generate 27% of the algorithms perfectly, with an additional 46% needing only minor editing, representing a reduction in the human effort and expertise needed to pool data.
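The kinds of transformations described, unit conversion and derived values such as BMI, can be sketched as follows (the function names and values are illustrative, not the actual generated algorithms):

```python
# Sketch of the transformation patterns MOLGENIS/connect auto-generates:
# a unit conversion and a derived-value calculation (BMI = kg / m^2).
def cm_to_m(height_cm):
    # Source biobank records height in cm; target DataSchema wants m.
    return height_cm / 100.0

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

h = cm_to_m(180)
print(round(bmi(72, h), 1))  # -> 22.2
```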

Availability And Implementation: Source code, binaries and documentation are available open source under the LGPLv3 from http://github.com/molgenis/molgenis and www.molgenis.org/connect.

Contact: m.a.swertz@rug.nl

Supplementary Information: Supplementary data are available at Bioinformatics online.
http://dx.doi.org/10.1093/bioinformatics/btw155
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4937195
July 2016

SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data.

Database (Oxford) 2015;2015. Epub 2015 Sep 18.

University of Groningen, University Medical Centre Groningen, Genomics Coordination Centre, Department of Genetics, Groningen, The Netherlands, University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, The Netherlands and LifeLines Cohort Study and Biobank, Groningen, The Netherlands

There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon-delimited format) to a target coding system (uploaded as an Excel spreadsheet, or in OWL (Web Ontology Language) or OBO (Open Biomedical Ontologies) format). It then semi-automatically shortlists candidate codes for each data value using Lucene and n-gram-based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA's applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free-text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). For the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chance of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects.
Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA.
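The n-gram shortlisting described above can be illustrated with a toy scorer (SORTA itself builds on Lucene; this bigram Jaccard measure and the example codes are invented for illustration):

```python
# Toy sketch of n-gram matching for shortlisting candidate codes
# against a free-text value, in the spirit of SORTA's recoding step.
def bigrams(text):
    text = text.lower()
    return {text[i:i + 2] for i in range(len(text) - 1)}

def similarity(a, b):
    ga, gb = bigrams(a), bigrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

codes = ["Walking", "Running", "Cycling"]
query = "walkin"  # a free-text value with a typo
shortlist = sorted(codes, key=lambda c: similarity(query, c), reverse=True)
print(shortlist[0])  # -> Walking
```

N-gram overlap is tolerant of typos and word-form variation, which is why it works well for free-text values that never exactly match the target terminology.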
http://dx.doi.org/10.1093/database/bav089
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4574036
May 2016

Genotype harmonizer: automatic strand alignment and format conversion for genotype data integration.

BMC Res Notes 2014 Dec 11;7:901. Epub 2014 Dec 11.

University of Groningen, University Medical Center Groningen, Genomics Coordination Center, Groningen, the Netherlands.

Background: To gain statistical power or to allow fine mapping, researchers typically want to pool data before meta-analyses or genotype imputation. However, the necessary harmonization of genetic datasets is currently error-prone because of many different file formats and lack of clarity about which genomic strand is used as reference.

Findings: Genotype Harmonizer (GH) is a command-line tool to harmonize genetic datasets by automatically solving issues concerning genomic strand and file format. GH solves the unknown strand issue by aligning ambiguous A/T and G/C SNPs to a specified reference, using linkage disequilibrium patterns without prior knowledge of the used strands. GH supports many common GWAS/NGS genotype formats including PLINK, binary PLINK, VCF, SHAPEIT2 & Oxford GEN. GH is implemented in Java and a large part of the functionality can also be used through the Java 'Genotype-IO' API. All software is open source under the LGPLv3 license and available from http://www.molgenis.org/systemsgenetics.
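The strand problem GH addresses can be shown with a minimal sketch (illustrative Python, not GH's Java implementation): flipping alleles to the complementary strand, and flagging the ambiguous A/T and G/C SNPs that GH resolves via linkage disequilibrium (the LD step itself is not shown):

```python
# Sketch of the strand problem: a SNP reported on the opposite strand
# must have its alleles complemented, and A/T or G/C SNPs are ambiguous
# because they look identical after a strand flip.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def flip(alleles):
    # Map each allele to the complementary strand.
    return tuple(COMPLEMENT[a] for a in alleles)

def is_ambiguous(alleles):
    # A/T and G/C SNPs are unchanged (as a set) by a strand flip.
    return set(alleles) == set(flip(alleles))

print(flip(("A", "G")))          # -> ('T', 'C')
print(is_ambiguous(("A", "T")))  # -> True
print(is_ambiguous(("A", "G")))  # -> False
```

For the ambiguous cases, allele labels alone cannot reveal the strand, which is why GH falls back on LD patterns with a reference panel.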

Conclusions: GH can be used to harmonize genetic datasets across different file formats and can be easily integrated as a step in routine meta-analysis and imputation pipelines.
http://dx.doi.org/10.1186/1756-0500-7-901
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4307387
December 2014

BiobankConnect: software to rapidly connect data elements for pooled analysis across biobanks using ontological and lexical indexing.

J Am Med Inform Assoc 2015 Jan 31;22(1):65-75. Epub 2014 Oct 31.

Department of Genetics, Genomics Coordination Center, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; Groningen Bioinformatics Center, University of Groningen, Groningen, The Netherlands.

Objective: Pooling data across biobanks is necessary to increase statistical power, reveal more subtle associations, and synergize the value of data sources. However, searching for desired data elements among the thousands of available elements and harmonizing differences in terminology, data collection, and structure, is arduous and time consuming.

Materials And Methods: To speed up biobank data pooling we developed BiobankConnect, a system to semi-automatically match desired data elements to available elements by: (1) annotating the desired elements with ontology terms using BioPortal; (2) automatically expanding the query for these elements with synonyms and subclass information using OntoCAT; (3) automatically searching available elements for these expanded terms using Lucene lexical matching; and (4) shortlisting relevant matches sorted by matching score.
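Step (2) above, query expansion, can be sketched as follows (BiobankConnect uses OntoCAT against real ontologies; the dictionary here stands in for an ontology and its entries are invented for illustration):

```python
# Toy sketch of ontology-based query expansion: a desired element's
# ontology term is expanded with synonyms and subclass labels before
# lexical matching. The entries below are invented stand-ins.
ONTOLOGY = {
    "hypertension": {
        "synonyms": ["high blood pressure"],
        "subclasses": ["essential hypertension"],
    },
}

def expand(term):
    entry = ONTOLOGY.get(term, {})
    return [term] + entry.get("synonyms", []) + entry.get("subclasses", [])

queries = expand("hypertension")
# Each expanded query would then be run against the available elements
# with a lexical matcher (step 3) and the hits ranked by score (step 4).
```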

Results: We evaluated BiobankConnect using human curated matches from EU-BioSHaRE, searching for 32 desired data elements in 7461 available elements from six biobanks. We found 0.75 precision at rank 1 and 0.74 recall at rank 10 compared to a manually curated set of relevant matches. In addition, best matches chosen by BioSHaRE experts ranked first in 63.0% and in the top 10 in 98.4% of cases, indicating that our system has the potential to significantly reduce manual matching work.

Conclusions: BiobankConnect provides an easy user interface to significantly speed up the biobank harmonization process. It may also prove useful for other forms of biomedical data integration. All the software can be downloaded as a MOLGENIS open source app from http://www.github.com/molgenis, with a demo available at http://www.biobankconnect.org.
http://dx.doi.org/10.1136/amiajnl-2013-002577
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4433361
January 2015

Evaluation of sampling density on the accuracy of aortic pulse wave velocity from velocity-encoded MRI in patients with Marfan syndrome.

J Magn Reson Imaging 2012 Dec 22;36(6):1470-6. Epub 2012 Jun 22.

Department of Cardiology, Leiden University Medical Center, Leiden, The Netherlands.

Purpose: To evaluate the effect of spatial (ie, number of sampling locations along the aorta) and temporal sampling density on aortic pulse wave velocity (PWV) assessment from velocity-encoded MRI in patients with Marfan syndrome (MFS).

Materials And Methods: Twenty-three MFS patients (12 men, mean age 36 ± 14 years) were included. Three PWV-methods were evaluated: 1) reference PWV(i.p.) from in-plane velocity-encoded MRI with dense temporal and spatial sampling; 2) conventional PWV(t.p.) from through-plane velocity-encoded MRI with dense temporal but sparse spatial sampling at three aortic locations; 3) EPI-accelerated PWV(t.p.) with sparse temporal but improved spatial sampling at five aortic locations with acceleration by echo-planar imaging (EPI).

Results: Despite inferior temporal resolution, EPI-accelerated PWV(t.p.) showed stronger correlation (r = 0.92 vs. r = 0.65, P = 0.03) with reference PWV(i.p.) in the total aorta, with less error (8% vs. 16%) and variation (11% vs. 27%) compared to conventional PWV(t.p.). In the aortic arch, correlation with reference PWV(i.p.) was comparable for EPI-accelerated and conventional PWV(t.p.) (r = 0.66 vs. r = 0.67, P = 0.46), albeit with a 92% scan-time reduction from EPI acceleration.

Conclusion: Improving spatial sampling density by adding two acquisition planes along the aorta results in more accurate PWV assessment, even when temporal resolution decreases. For regional PWV assessment in the aortic arch, EPI-accelerated and conventional PWV assessment are comparably accurate. The scan-time reduction makes EPI-accelerated PWV assessment the preferred method.
http://dx.doi.org/10.1002/jmri.23729
December 2012

Improved aortic pulse wave velocity assessment from multislice two-directional in-plane velocity-encoded magnetic resonance imaging.

J Magn Reson Imaging 2010 Nov;32(5):1086-94

Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands.

Purpose: To evaluate the accuracy and reproducibility of aortic pulse wave velocity (PWV) assessment by in-plane velocity-encoded magnetic resonance imaging (MRI).

Materials And Methods: In 14 patients selected for cardiac catheterization on suspicion of coronary artery disease and 15 healthy volunteers, PWV was assessed with multislice two-directional in-plane velocity-encoded MRI (PWV(i.p.)) and compared with conventionally assessed PWV from multisite one-directional through-plane velocity-encoded MRI (PWV(t.p.)). In patients, PWV was also obtained from intraarterially acquired pressure-time curves (PWV(pressure)), which is considered the gold standard reference method. In volunteers, PWV(i.p.) and PWV(t.p.) were obtained in duplicate in the same examination to test reproducibility.
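The quantity under study reduces to a simple ratio: PWV is the aortic path length between two measurement sites divided by the transit time of the flow wave between them. The sketch below is a generic illustration with made-up waveforms; the half-peak arrival estimator is a simplification, not the authors' algorithm:

```python
# Generic sketch: PWV = path length / transit time between two sites.
# Transit time is estimated here as the arrival of half-peak velocity,
# a simplification chosen only for illustration.
def arrival_time(times, velocities):
    half_peak = max(velocities) / 2.0
    for t, v in zip(times, velocities):
        if v >= half_peak:
            return t
    return times[-1]

def pwv(path_length_m, times, v_proximal, v_distal):
    dt = arrival_time(times, v_distal) - arrival_time(times, v_proximal)
    return path_length_m / dt

times = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05]   # s
v_prox = [0, 60, 100, 80, 40, 10]              # cm/s, wave arrives early
v_dist = [0, 5, 20, 60, 100, 80]               # same wave, arriving later
print(round(pwv(0.12, times, v_prox, v_dist), 1))  # -> 6.0 (m/s)
```

A stiffer aorta propagates the wave faster, so a shorter transit time over the same path length yields a higher PWV.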

Results: In patients, PWV(i.p.) showed a stronger correlation with PWV(pressure) and similar variation compared to PWV(t.p.) (Pearson correlation r = 0.75 vs. r = 0.58, and coefficient of variation [COV] = 10% vs. COV = 12%, respectively). In volunteers, repeated PWV(i.p.) assessment showed stronger correlation and less variation than repeated PWV(t.p.) (proximal aorta: r = 0.97 and COV = 10% vs. r = 0.69 and COV = 17%; distal aorta: r = 0.94 and COV = 12% vs. r = 0.90 and COV = 16%; total aorta: r = 0.97 and COV = 7% vs. r = 0.90 and COV = 13%).

Conclusion: PWV(i.p.) is an improvement over conventional PWV(t.p.), showing higher agreement with the gold standard (PWV(pressure)) and higher reproducibility on repeated MRI assessment.
http://dx.doi.org/10.1002/jmri.22359
November 2010

Automated contour detection in cardiac MRI using active appearance models: the effect of the composition of the training set.

Invest Radiol 2007 Oct;42(10):697-703

Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands.

Objective: To define the optimal training set for the automated segmentation of short-axis left ventricular magnetic resonance (MR) imaging studies in clinical practice based on active appearance models (AAMs).

Materials And Methods: We investigated the segmentation accuracy by varying the size and composition of the training set (ie, the ratio between pathologic and normal ventricle images, and the vendor dependence). The accuracy was assessed using the degree of similarity and the difference in ejection fraction between automatically detected and manually drawn contours.
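The abstract does not state which similarity metric it uses; a common choice for contour agreement is the Dice overlap of the enclosed regions, sketched here on toy pixel masks purely as an assumption for illustration:

```python
# Dice overlap between two segmentations represented as sets of pixel
# coordinates inside each contour. This metric is an assumption; the
# paper only says "degree of similarity".
def dice(mask_a, mask_b):
    a, b = set(mask_a), set(mask_b)
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 1.0

auto = [(0, 0), (0, 1), (1, 0), (1, 1)]    # pixels inside auto contour
manual = [(0, 1), (1, 0), (1, 1), (2, 1)]  # pixels inside manual contour
print(dice(auto, manual))  # -> 0.75
```

A Dice value of 1.0 means identical regions; 0.0 means no overlap.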

Results: Including more images in the training set results in better accuracy of the detected contours, with optimal results achieved with 180 images in the training set. AAM-based contour detection with a mixed model of 80% normal and 20% pathologic images provides good segmentation accuracy in clinical routine. Finally, it is essential to define different AAM models for different vendors of MRI systems.

Conclusions: A model defined on a sufficient number of images with the correct distribution of image characteristics achieves good matches in clinical routine. It is essential to define different AAM models for different vendors of MRI systems.
http://dx.doi.org/10.1097/RLI.0b013e318070dc93
October 2007