Publications by authors named "Mai Xu"

82 Publications

A UVB-responsive common variant at chromosome band 7p21.1 confers tanning response and melanoma risk via regulation of the aryl hydrocarbon receptor, AHR.

Am J Hum Genet 2021 09 2;108(9):1611-1630. Epub 2021 Aug 2.

Laboratory of Translational Genomics, Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD 20892, USA. Electronic address:

Genome-wide association studies (GWASs) have identified a melanoma-associated locus on chromosome band 7p21.1 with rs117132860 as the lead SNP and a secondary independent signal marked by rs73069846. rs117132860 is also associated with tanning ability and cutaneous squamous cell carcinoma (cSCC). Because ultraviolet radiation (UVR) is a key environmental exposure for all three traits, we investigated the mechanisms by which this locus contributes to melanoma risk, focusing on cellular response to UVR. Fine-mapping of melanoma GWASs identified four independent sets of candidate causal variants. A GWAS region-focused Capture-C study of primary melanocytes identified physical interactions between two causal sets and the promoter of the aryl hydrocarbon receptor (AHR). Subsequent chromatin state annotation, eQTL, and luciferase assays identified rs117132860 as a functional variant and reinforced AHR as a likely causal gene. Because AHR plays critical roles in cellular response to dioxin and UVR, we explored links between this SNP and AHR expression after both 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and ultraviolet B (UVB) exposure. Allele-specific AHR binding to rs117132860-G was enhanced following both exposures, consistent with predicted weakened AHR binding to the risk/poor-tanning rs117132860-A allele, and allele-preferential AHR expression driven from the protective rs117132860-G allele was observed following UVB exposure. Small deletions surrounding rs117132860 introduced via CRISPR abrogate AHR binding, reduce melanocyte cell growth, and prolong growth arrest following UVB exposure. These data suggest AHR is a melanoma susceptibility gene at the 7p21.1 risk locus and rs117132860 is a functional variant within a UVB-responsive element, leading to allelic AHR expression and altering melanocyte growth phenotypes upon exposure.
http://dx.doi.org/10.1016/j.ajhg.2021.07.002

Cell-type-specific meQTLs extend melanoma GWAS annotation beyond eQTLs and inform melanocyte gene-regulatory mechanisms.

Am J Hum Genet 2021 09 21;108(9):1631-1646. Epub 2021 Jul 21.

Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD 20892, USA. Electronic address:

Although expression quantitative trait loci (eQTLs) have been powerful in identifying susceptibility genes from genome-wide association study (GWAS) findings, most trait-associated loci are not explained by eQTLs alone. Alternative QTLs, including DNA methylation QTLs (meQTLs), are emerging, but cell-type-specific meQTLs using cells of disease origin have been lacking. Here, we established an meQTL dataset by using primary melanocytes from 106 individuals and identified 1,497,502 significant cis-meQTLs. Multi-QTL colocalization with meQTLs, eQTLs, and mRNA splice-junction QTLs from the same individuals together with imputed methylome-wide and transcriptome-wide association studies identified candidate susceptibility genes at 63% of melanoma GWAS loci. Among the three molecular QTLs, meQTLs were the single largest contributor. To compare melanocyte meQTLs with those from malignant melanomas, we performed meQTL analysis on skin cutaneous melanomas from The Cancer Genome Atlas (n = 444). A substantial proportion of meQTL probes (45.9%) in primary melanocytes is preserved in melanomas, while a smaller fraction of eQTL genes is preserved (12.7%). Integration of melanocyte multi-QTLs and melanoma meQTLs identified candidate susceptibility genes at 72% of melanoma GWAS loci. Beyond GWAS annotation, meQTL-eQTL colocalization in melanocytes suggested that 841 unique genes potentially share a causal variant with a nearby methylation probe in melanocytes. Finally, melanocyte trans-meQTLs identified a hotspot for rs12203592, a cis-eQTL of a transcription factor, IRF4, with 131 candidate target CpGs. Motif enrichment and IRF4 ChIP-seq analysis demonstrated that these target CpGs are enriched in IRF4 binding sites, suggesting an IRF4-mediated regulatory network. Our study highlights the utility of cell-type-specific meQTLs.
http://dx.doi.org/10.1016/j.ajhg.2021.06.018

Patch-Wise Spatial-Temporal Quality Enhancement for HEVC Compressed Video.

IEEE Trans Image Process 2021 15;30:6459-6472. Epub 2021 Jul 15.

Recently, many deep-learning-based studies have explored the potential quality improvement of compressed videos. These methods mostly utilize either spatial or temporal information to perform frame-level video enhancement. However, they fail to combine different spatial-temporal information to adaptively exploit adjacent patches when enhancing the current patch, and thus achieve limited enhancement performance, especially on scene-changing and strong-motion videos. To overcome these limitations, we propose a patch-wise spatial-temporal quality enhancement network that first extracts spatial and temporal features, then recalibrates and fuses them. Specifically, we design a temporal and spatial-wise attention-based feature distillation structure to adaptively utilize the adjacent patches for distilling patch-wise temporal features. For adaptively enhancing different patches with spatial and temporal information, a channel and spatial-wise attention fusion block is proposed to achieve patch-wise recalibration and fusion of spatial and temporal features. Experimental results demonstrate that our network achieves a peak signal-to-noise ratio improvement of 0.55-0.69 dB over the compressed videos at different quantization parameters, outperforming state-of-the-art approaches.
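The 0.55-0.69 dB gain above is measured in PSNR. For reference, the metric can be computed as in the following minimal sketch (not the authors' code; the function name and the 8-bit peak value of 255 are assumptions):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference frame and a
    compressed or enhanced frame, both given as arrays of pixel values."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

A quality-enhancement network that raises PSNR by 0.55-0.69 dB correspondingly lowers the mean squared error against the uncompressed frame.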
http://dx.doi.org/10.1109/TIP.2021.3092949

Novel MAPK/AKT-impairing germline NRAS variant identified in a melanoma-prone family.

Fam Cancer 2021 Jul 3. Epub 2021 Jul 3.

Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, 9609 Medical Center Drive, EPS 7106, Bethesda, MD, 20892, USA.

While several high-penetrance melanoma risk genes are known, variation in these genes fails to explain melanoma susceptibility in a large proportion of high-risk families. As part of a melanoma family sequencing study including 435 families from Mediterranean populations, we identified a novel NRAS variant (c.170A>C, p.D57A) in an Italian melanoma-prone family. This variant is absent from exomes in gnomAD, ESP, UK Biobank, and the 1000 Genomes Project, as well as from 11,273 Mediterranean individuals and 109 melanoma-prone families from the US and Australia. This variant occurs in the GTP-binding pocket of NRAS. Unlike other activating RAS alterations, NRAS D57A expression is unable to activate the MAPK pathway either constitutively or after stimulation, but enhances EGF-induced PI3K-pathway signaling under serum-starved conditions in vitro. Consistent with in vitro data demonstrating that NRAS D57A does not enrich GTP binding, molecular modeling suggests that the D57A substitution would be expected to impair Mg2+ binding and decrease the nucleotide-binding and GTPase activity of NRAS. While we cannot firmly establish NRAS c.170A>C (p.D57A) as a melanoma susceptibility variant, further investigation of NRAS as a familial melanoma gene is warranted.
http://dx.doi.org/10.1007/s10689-021-00267-9

DeepQTMT: A Deep Learning Approach for Fast QTMT-Based CU Partition of Intra-Mode VVC.

IEEE Trans Image Process 2021 3;30:5377-5390. Epub 2021 Jun 3.

Versatile Video Coding (VVC), as the latest standard, significantly improves the coding efficiency over its predecessor standard High Efficiency Video Coding (HEVC), but at the expense of sharply increased complexity. In VVC, the quad-tree plus multi-type tree (QTMT) structure of the coding unit (CU) partition accounts for over 97% of the encoding time, due to the brute-force search for recursive rate-distortion (RD) optimization. Instead of the brute-force QTMT search, this paper proposes a deep learning approach to predict the QTMT-based CU partition, for drastically accelerating the encoding process of intra-mode VVC. First, we establish a large-scale database containing sufficient CU partition patterns with diverse video content, which can facilitate the data-driven VVC complexity reduction. Next, we propose a multi-stage exit CNN (MSE-CNN) model with an early-exit mechanism to determine the CU partition, in accord with the flexible QTMT structure at multiple stages. Then, we design an adaptive loss function for training the MSE-CNN model, synthesizing both the uncertain number of split modes and the target on minimized RD cost. Finally, a multi-threshold decision scheme is developed, achieving a desirable trade-off between complexity and RD performance. The experimental results demonstrate that our approach can reduce the encoding time of VVC by 44.65%-66.88% with a negligible Bjøntegaard delta bit-rate (BD-BR) of 1.322%-3.188%, significantly outperforming other state-of-the-art approaches.
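The early-exit idea can be illustrated with a toy confidence rule (a hypothetical sketch, not the paper's actual multi-threshold scheme; the mode names and the relative-floor rule are invented for illustration):

```python
def early_exit_candidates(split_probs, threshold):
    """Given predicted probabilities over split modes at one QTMT stage,
    exit early with the single best mode when the model is confident;
    otherwise keep all sufficiently likely modes for RD checking."""
    best_mode = max(split_probs, key=split_probs.get)
    if split_probs[best_mode] >= threshold:
        return [best_mode]  # confident: skip the RD search over other modes
    # uncertain: retain every mode whose probability clears a relative floor
    floor = threshold * split_probs[best_mode]
    return [mode for mode, p in split_probs.items() if p >= floor]
```

The fewer modes survive this filter, the fewer recursive RD evaluations the encoder must run, which is where the encoding-time savings come from.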
http://dx.doi.org/10.1109/TIP.2021.3083447

Joint Learning of 3D Lesion Segmentation and Classification for Explainable COVID-19 Diagnosis.

IEEE Trans Med Imaging 2021 09 31;40(9):2463-2476. Epub 2021 Aug 31.

Given the outbreak of the COVID-19 pandemic and the shortage of medical resources, many deep learning models have been proposed for automatic COVID-19 diagnosis based on 3D computed tomography (CT) scans. However, the existing models process 3D lesion segmentation and disease classification independently, ignoring the inherent correlation between these two tasks. In this paper, we propose a joint deep learning model of 3D lesion segmentation and classification for diagnosing COVID-19, called DeepSC-COVID, as the first attempt in this direction. Specifically, we establish a large-scale CT database containing 1,805 3D CT scans with fine-grained lesion annotations, and reveal 4 findings about lesion differences between COVID-19 and community-acquired pneumonia (CAP). Inspired by our findings, DeepSC-COVID is designed with 3 subnets: a cross-task feature subnet for feature extraction, a 3D lesion subnet for lesion segmentation, and a classification subnet for disease diagnosis. Besides, a task-aware loss is proposed for learning the task interaction across the 3D lesion and classification subnets. Different from all existing models for COVID-19 diagnosis, our model is interpretable with fine-grained 3D lesion distribution. Finally, extensive experimental results show that the joint learning framework in our model significantly improves the performance of 3D lesion segmentation and disease classification in both efficiency and efficacy.
http://dx.doi.org/10.1109/TMI.2021.3079709

Semantic Perceptual Image Compression With a Laplacian Pyramid of Convolutional Networks.

IEEE Trans Image Process 2021 12;30:4225-4237. Epub 2021 Apr 12.

Existing image compression methods usually choose or optimize low-level representations manually and consequently struggle with texture restoration at low bit rates. Recently, deep neural network (DNN)-based image compression methods have achieved impressive results. To achieve better perceptual quality, generative models are widely used, especially generative adversarial networks (GANs). However, training GANs is intractable, especially for high-resolution images, with the challenges of unconvincing reconstructions and unstable training. To overcome these problems, we propose a novel DNN-based image compression framework in this paper. The key point is decomposing an image into multi-scale sub-images using the proposed Laplacian-pyramid-based multi-scale networks. For each pyramid scale, we train a specific DNN to exploit the compressive representation. Meanwhile, each scale is optimized with different aspects, including pixel, semantics, distribution, and entropy, for a good "rate-distortion-perception" trade-off. By independently optimizing each pyramid scale, we make each stage manageable and each sub-image plausible. Experimental results demonstrate that our method achieves state-of-the-art performance, with advantages over existing methods in providing improved visual quality. Additionally, better performance in downstream visual analysis tasks conducted on the reconstructed images validates the excellent semantics-preserving ability of the proposed method.
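The Laplacian-pyramid decomposition at the heart of this framework can be sketched as follows (a simplified stand-in: plain decimation replaces the usual Gaussian blur-and-subsample, and all names are illustrative, not the paper's implementation):

```python
import numpy as np

def build_laplacian_pyramid(img, levels=3):
    """Decompose an image into band-pass residuals plus a coarsest scale."""
    pyramid = []
    current = img.astype(np.float64)
    for _ in range(levels - 1):
        down = current[::2, ::2]  # naive decimation standing in for blur+subsample
        up = down.repeat(2, axis=0).repeat(2, axis=1)[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)  # band-pass residual at this scale
        current = down
    pyramid.append(current)  # coarsest (low-pass) scale
    return pyramid

def reconstruct_from_pyramid(pyramid):
    """Invert the decomposition by upsampling and adding residuals back."""
    current = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        up = current.repeat(2, axis=0).repeat(2, axis=1)[:residual.shape[0], :residual.shape[1]]
        current = up + residual
    return current
```

Each band-pass level is what a per-scale DNN would compress independently; by construction, summing the upsampled levels reconstructs the input exactly.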
http://dx.doi.org/10.1109/TIP.2021.3065244

A hierarchical deep learning approach with transparency and interpretability based on small samples for glaucoma diagnosis.

NPJ Digit Med 2021 Mar 11;4(1):48. Epub 2021 Mar 11.

Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology & Visual Science Key Lab, Beijing, China.

The application of deep learning algorithms for medical diagnosis in the real world faces challenges with transparency and interpretability. The labeling of large-scale samples leads to costly investment in developing deep learning algorithms. The application of human prior knowledge is an effective way to solve these problems. Previously, we developed a deep learning system for glaucoma diagnosis based on a large number of samples that had high sensitivity and specificity. However, it is a black box and the specific analytic methods cannot be elucidated. Here, we establish a hierarchical deep learning system based on a small number of samples that comprehensively simulates the diagnostic thinking of human experts. This system can extract the anatomical characteristics of the fundus images, including the optic disc, optic cup, and appearance of the retinal nerve fiber layer to realize automatic diagnosis of glaucoma. In addition, this system is transparent and interpretable, and the intermediate process of prediction can be visualized. Applying this system to three validation datasets of fundus images, we demonstrate performance comparable to that of human experts in diagnosing glaucoma. Moreover, it markedly improves the diagnostic accuracy of ophthalmologists. This system may expedite the screening and diagnosis of glaucoma, resulting in improved clinical outcomes.
http://dx.doi.org/10.1038/s41746-021-00417-4
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7952384

Deep Coupled Feedback Network for Joint Exposure Fusion and Image Super-Resolution.

IEEE Trans Image Process 2021 24;30:3098-3112. Epub 2021 Feb 24.

Nowadays, people are accustomed to taking photos to record their daily lives; however, the photos are often not consistent with the real natural scenes. The two main differences are that photos tend to have low dynamic range (LDR) and low resolution (LR), due to the inherent imaging limitations of cameras. Multi-exposure image fusion (MEF) and image super-resolution (SR) are two widely used techniques to address these two issues, but they are usually treated as independent research problems. In this paper, we propose a deep Coupled Feedback Network (CF-Net) to achieve MEF and SR simultaneously. Given a pair of extremely over-exposed and under-exposed LDR images with low resolution, our CF-Net is able to generate an image with both high dynamic range (HDR) and high resolution. Specifically, CF-Net is composed of two coupled recursive sub-networks, with the LR over-exposed and under-exposed images as inputs, respectively. Each sub-network consists of one feature extraction block (FEB), one super-resolution block (SRB), and several coupled feedback blocks (CFBs). The FEB and SRB extract high-level features from the input LDR image that are helpful for resolution enhancement. The CFB is arranged after the SRB, and its role is to absorb the learned features from the SRBs of the two sub-networks so that it can produce a high-resolution HDR image. We use a series of CFBs to progressively refine the fused high-resolution HDR image. Extensive experimental results show that our CF-Net drastically outperforms other state-of-the-art methods in terms of both SR accuracy and fusion performance. The software code is available at https://github.com/ytZhang99/CF-Net.
http://dx.doi.org/10.1109/TIP.2021.3058764

Saliency Prediction on Omnidirectional Image With Generative Adversarial Imitation Learning.

IEEE Trans Image Process 2021 21;30:2087-2102. Epub 2021 Jan 21.

When watching omnidirectional images (ODIs), subjects can access different viewports by moving their heads. Therefore, it is necessary to predict subjects' head fixations on ODIs. Inspired by generative adversarial imitation learning (GAIL), this paper proposes a novel approach to predict saliency of head fixations on ODIs, named SalGAIL. First, we establish a dataset for attention on ODIs (AOI). In contrast to traditional datasets, our AOI dataset is large-scale, containing the head fixations of 30 subjects viewing 600 ODIs. Next, we mine our AOI dataset and discover three findings: (1) head fixations are consistent among subjects, and their consistency grows as the number of subjects increases; (2) head fixations exhibit a front center bias (FCB); and (3) the magnitude of head movement is similar across subjects. According to these findings, our SalGAIL approach applies deep reinforcement learning (DRL) to predict the head fixations of one subject, in which GAIL learns the reward of DRL, rather than the traditional human-designed reward. Then, multi-stream DRL is developed to yield the head fixations of different subjects, and the saliency map of an ODI is generated by convolving the predicted head fixations. Finally, experiments validate the effectiveness of our approach in predicting saliency maps of ODIs, significantly better than 11 state-of-the-art approaches. Our AOI dataset and the code of SalGAIL are available online at https://github.com/yanglixiaoshen/SalGAIL.
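The final step, turning predicted head fixations into a saliency map, amounts to placing a smoothing kernel at each fixation. A minimal sketch (the function name, Gaussian kernel, and max normalization are assumptions, and the equirectangular geometry of ODIs is ignored here):

```python
import numpy as np

def fixations_to_saliency(fixations, height, width, sigma=10.0):
    """Convert discrete (row, col) fixation points into a dense saliency map
    by summing an isotropic Gaussian centered at each fixation."""
    ys, xs = np.mgrid[0:height, 0:width]
    sal = np.zeros((height, width))
    for fy, fx in fixations:
        sal += np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2.0 * sigma ** 2))
    if sal.max() > 0:
        sal /= sal.max()  # normalize to [0, 1]
    return sal
```

Regions attracting fixations from many subjects accumulate overlapping Gaussians and therefore receive higher saliency values.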
http://dx.doi.org/10.1109/TIP.2021.3050861

Viewport-based CNN: A Multi-task Approach for Assessing 360° Video Quality.

IEEE Trans Pattern Anal Mach Intell 2020 Oct 5;PP. Epub 2020 Oct 5.

For 360° video, the existing visual quality assessment (VQA) approaches are designed based on either the whole frames or the cropped patches, ignoring the fact that subjects can only access viewports. When watching 360° video, subjects select viewports through head movement (HM) and then fixate on attractive regions within the viewports through eye movement (EM). Therefore, this paper proposes a two-stage multi-task approach for viewport-based VQA on 360° video. Specifically, we first establish a large-scale VQA dataset of 360° video, called VQA-ODV, which collects the subjective quality scores and the HM and EM data on 600 video sequences. By mining our dataset, we find that the subjective quality of 360° video is related to camera motion, viewport positions, and saliency within viewports. Accordingly, we propose a viewport-based convolutional neural network (V-CNN) approach for VQA on 360° video, which has a novel multi-task architecture composed of a viewport proposal network (VP-net) and a viewport quality network (VQ-net). The VP-net handles the auxiliary tasks of camera motion detection and viewport proposal, while the VQ-net accomplishes the auxiliary task of viewport saliency prediction and the main task of VQA. The experiments validate that our V-CNN approach significantly advances state-of-the-art VQA performance on 360° video and is also effective in the three auxiliary tasks.
http://dx.doi.org/10.1109/TPAMI.2020.3028509

Massively parallel reporter assays of melanoma risk variants identify MX2 as a gene promoting melanoma.

Nat Commun 2020 06 1;11(1):2718. Epub 2020 Jun 1.

Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD, 20892, USA.

Genome-wide association studies (GWAS) have identified ~20 melanoma susceptibility loci, most of which are not functionally characterized. Here we report an approach integrating massively-parallel reporter assays (MPRA) with cell-type-specific epigenome and expression quantitative trait loci (eQTL) to identify susceptibility genes/variants from multiple GWAS loci. From 832 high-LD variants, we identify 39 candidate functional variants from 14 loci displaying allelic transcriptional activity, a subset of which corroborates four colocalizing melanocyte cis-eQTL genes. Among these, we further characterize the locus encompassing the HIV-1 restriction gene, MX2 (Chr21q22.3), and validate a functional intronic variant, rs398206. rs398206 mediates the binding of the transcription factor, YY1, to increase MX2 levels, consistent with the cis-eQTL of MX2 in primary human melanocytes. Melanocyte-specific expression of human MX2 in a zebrafish model demonstrates accelerated melanoma formation in a BRAF background. Our integrative approach streamlines GWAS follow-up studies and highlights a pleiotropic function of MX2 in melanoma susceptibility.
http://dx.doi.org/10.1038/s41467-020-16590-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7264232

Model-Free Distortion Rectification Framework Bridged by Distortion Distribution Map.

IEEE Trans Image Process 2020 Jan 17. Epub 2020 Jan 17.

Recently, learning-based distortion rectification schemes have shown high efficiency. However, most of these methods only focus on a specific camera model with fixed parameters, and thus fail to extend to other models. To avoid this disadvantage, we propose a model-free distortion rectification framework for the single-shot case, bridged by the distortion distribution map (DDM). Our framework is based on the observation that the pixel-wise distortion information in a distorted image is mathematically regular, despite different camera models having different types and numbers of distortion parameters. Motivated by this observation, instead of estimating the heterogeneous distortion parameters, we construct a distortion distribution map that intuitively indicates the global distortion features of a distorted image. In addition, we develop a dual-stream feature learning module, benefiting from both the advantages of traditional methods that leverage local handcrafted features and learning-based methods that focus on global semantic feature perception. Due to the sparsity of handcrafted features, we discretize the features into a 2D point map and learn their structure with a network inspired by PointNet. Finally, a multimodal attention fusion module is designed to attentively fuse the local structural and global semantic features, providing hybrid features for more reasonable scene recovery. The experimental results demonstrate the excellent generalization ability and superior performance of our method in both quantitative and qualitative evaluations, compared with the state-of-the-art methods.
http://dx.doi.org/10.1109/TIP.2020.2964523

Efficient screening of novel adsorbents for coalbed methane recovery.

J Colloid Interface Sci 2020 Apr 8;565:131-141. Epub 2020 Jan 8.

School for Engineering of Matter, Transport and Energy, Arizona State University, Tempe, AZ, 85287, United States. Electronic address:

Many adsorbents with outstanding methane (CH4)/nitrogen (N2) separation performance have been reported recently. Some may have the potential for coalbed methane (CBM) recovery to help address the current energy crisis. However, no systematic method for evaluating these adsorbents has been available. This study compares and assesses 47 novel adsorbents suitable for CBM recovery, and guides further adsorbent development, using a three-step simulation-based method. First, the adsorbents of interest were prescreened based on the CH4/N2 adsorption selectivity predicted from ideal adsorbed solution theory and a composite parameter S that incorporates both adsorption selectivity and working capacity. Then, the top 10 adsorbents from the prescreening step were tested in a simulated vacuum pressure swing adsorption process. The process performance of the adsorbents was evaluated by comparing their product purity, recovery, and productivity at two base conditions. Cu-MOF and NAPC-3-6 exhibited the highest product purity, and OAC-1 showed the highest product recovery and productivity in the two base cases. The process performance indicators of the various adsorbents were also correlated with their adsorption selectivities and capacities to investigate how these adsorption characteristics affect process performance. We find that the working capacities of the adsorbents are strongly related to product recovery, while the adsorption selectivities are more related to product purity. Finally, a process optimization study was performed on the three adsorbents that exhibited the best performance in the preceding evaluation. The objective of the optimization is to minimize the energy consumption of the process while meeting a specified product purity (95% or 98%) and recovery rate (90%). The decision variables include the evacuation pressure, feed flow rate, and adsorption pressure. The sensitivity of each variable was also examined through a parametric study. The optimization results indicate that adsorbent selection depends on the production scale and purity requirement: OAC-1 is the best candidate for large-scale CH4 production with a regular purity grade, while NAPC-3-6 is a better choice for small-scale CH4 production with a high purity requirement.
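For reference, the equilibrium selectivity used in the prescreening step is the ratio of adsorbed-phase to gas-phase composition; the full IAST prediction requires solving the isotherm equations and is omitted here (a minimal sketch with hypothetical inputs, not the study's code):

```python
def ch4_n2_selectivity(q_ch4, q_n2, y_ch4, y_n2):
    """Equilibrium CH4/N2 selectivity: ratio of adsorbed-phase loadings
    (q, e.g. in mmol/g) normalized by the gas-phase mole fractions (y)."""
    return (q_ch4 / q_n2) / (y_ch4 / y_n2)
```

An adsorbent that loads twice as much CH4 as N2 from an equimolar feed has a selectivity of 2; the composite parameter S additionally weights such a selectivity by the working capacity.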
http://dx.doi.org/10.1016/j.jcis.2020.01.008

A hierarchical glucose-intercalated NiMn-LDH@NiCo2S4 core-shell structure as a binder-free electrode for flexible all-solid-state asymmetric supercapacitors.

Nanoscale 2020 Jan;12(3):1852-1863

Key Laboratory of Poyang Lake Environment and Resource Utilization, Ministry of Education, School of Resources Environmental & Chemical Engineering, Nanchang University, Nanchang, 330031, China.

Flexible, lightweight, and high-energy-density asymmetric supercapacitors (ASCs) are highly attractive for portable and wearable electronics. However, the implementation of such flexible ASCs is still hampered by the low specific capacitance and sluggish reaction kinetics of the electrode materials. Herein, a hierarchical core-shell structure of hybrid glucose-intercalated NiMn-LDH (NiMn-G-LDH)@NiCo2S4 hollow nanotubes is deliberately constructed on flexible carbon fiber cloth (CFC). The highly conductive hollow NiCo2S4 nanotube arrays not only provide high-speed pathways for ion and electrolyte transfer but also regulate the growth of the NiMn-G-LDH nanosheets. The expanded interlayer distance of the NiMn-G-LDH nanosheets further facilitates ion diffusion and improves the rate retention. Benefiting from this rational engineering, the flexible NiMn-G-LDH@NiCo2S4@CFC free-standing electrode delivers a superior specific capacity of 1018 C g-1 at 1 A g-1, nearly twice that of the pristine NiMn-LDH@NiCo2S4 electrode. In addition, the as-assembled flexible all-solid-state ASC device (NiMn-G-LDH@NiCo2S4@CFC//AC) is capable of working at various bending angles and exhibits an impressive energy density of 60.3 W h kg-1 at a power density of 375 W kg-1, as well as superior cycling stability of 86.4% capacity retention after 10 000 cycles.
http://dx.doi.org/10.1039/c9nr09083e

MFQE 2.0: A New Approach for Multi-Frame Quality Enhancement on Compressed Video.

IEEE Trans Pattern Anal Mach Intell 2021 Mar 4;43(3):949-963. Epub 2021 Feb 4.

The past few years have witnessed great success in applying deep learning to enhance the quality of compressed image/video. The existing approaches mainly focus on enhancing the quality of a single frame, not considering the similarity between consecutive frames. Since heavy fluctuation exists across compressed video frames as investigated in this paper, frame similarity can be utilized for quality enhancement of low-quality frames given their neighboring high-quality frames. This task is Multi-Frame Quality Enhancement (MFQE). Accordingly, this paper proposes an MFQE approach for compressed video, as the first attempt in this direction. In our approach, we first develop a Bidirectional Long Short-Term Memory (BiLSTM) based detector to locate Peak Quality Frames (PQFs) in compressed video. Then, a novel Multi-Frame Convolutional Neural Network (MF-CNN) is designed to enhance the quality of compressed video, in which the non-PQF and its nearest two PQFs are the input. In MF-CNN, motion between the non-PQF and PQFs is compensated by a motion compensation subnet. Subsequently, a quality enhancement subnet fuses the non-PQF and compensated PQFs, and then reduces the compression artifacts of the non-PQF. Also, PQF quality is enhanced in the same way. Finally, experiments validate the effectiveness and generalization ability of our MFQE approach in advancing the state-of-the-art quality enhancement of compressed video.
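The PQF-locating idea can be illustrated with a simple local-maximum rule over per-frame quality (a naive heuristic stand-in for the BiLSTM detector described above; names and the neighbor rule are illustrative):

```python
def locate_pqfs(frame_psnr):
    """Return indices of Peak Quality Frames: interior frames whose PSNR
    is no lower than either neighbor, reflecting the quality fluctuation
    observed across compressed video frames."""
    return [
        i for i in range(1, len(frame_psnr) - 1)
        if frame_psnr[i] >= frame_psnr[i - 1] and frame_psnr[i] >= frame_psnr[i + 1]
    ]
```

Each non-PQF between two detected PQFs would then be enhanced using those two frames as its nearest high-quality references.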
http://dx.doi.org/10.1109/TPAMI.2019.2944806

Development and Validation of a Deep Learning System to Detect Glaucomatous Optic Neuropathy Using Fundus Photographs.

JAMA Ophthalmol 2019 12;137(12):1353-1360

Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.

Importance: A deep learning system (DLS) that could automatically detect glaucomatous optic neuropathy (GON) with high sensitivity and specificity could expedite screening for GON.

Objective: To establish a DLS for the detection of GON using retinal fundus images and a glaucoma diagnosis convolutional neural network (GD-CNN) that has the ability to be generalized across populations.

Design, Setting, And Participants: In this cross-sectional study, a DLS was developed for automated classification of GON using retinal fundus images obtained from the Chinese Glaucoma Study Alliance (CGSA), the Handan Eye Study, and online databases. A total of 241 032 images were selected as the training data set. The images were entered into the databases on June 9, 2009, obtained on July 11, 2018, and analyses were performed on December 15, 2018. The generalization of the DLS was tested in several validation data sets, which allowed assessment of the DLS in a clinical setting without exclusions, testing against variable image quality based on fundus photographs obtained from websites, evaluation in a population-based study that reflects a natural distribution of patients with glaucoma within the cohort, and an additional data set with a diverse ethnic distribution. An online learning system was established to transfer the trained and validated DLS to generalize the results with fundus images from new sources. To better understand the DLS decision-making process, a prediction visualization test was performed to identify the regions of the fundus images used by the DLS for diagnosis.

Exposures: Use of a deep learning system.

Main Outcomes And Measures: Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of the DLS with reference to professional graders.

Results: From a total of 274 413 fundus images initially obtained from CGSA, 269 601 images passed initial image quality review and were graded for GON. A total of 241 032 images (definite GON, 29 865 [12.4%]; probable GON, 11 046 [4.6%]; unlikely GON, 200 121 [83.0%]) from 68 013 patients were selected by random sampling to train the GD-CNN model. Validation and evaluation of the GD-CNN model were performed using the remaining 28 569 images from CGSA. The AUC of the GD-CNN model in the primary local validation data set was 0.996 (95% CI, 0.995-0.998), with sensitivity of 96.2% and specificity of 97.7%. The most common reason for false-negative and false-positive grading by the GD-CNN (51 of 119 [46.3%] and 191 of 588 [32.3%], respectively) and by manual grading (50 of 113 [44.2%] and 183 of 538 [34.0%]) was pathologic or high myopia.
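The reported AUC, sensitivity, and specificity can be computed directly from graded labels and model scores. A minimal, dependency-free sketch (the helper names and toy data below are illustrative, not from the study):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC via the Mann-Whitney statistic: fraction of (positive, negative)
    pairs where the positive case scores higher (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])` is 1.0 because every glaucoma case outranks every control.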

Conclusions And Relevance: Application of GD-CNN to fundus images from different settings and varying image quality demonstrated a high sensitivity, specificity, and generalizability for detecting GON. These findings suggest that automated DLS could enhance current screening programs in a cost-effective and time-efficient manner.
View Article and Find Full Text PDF

http://dx.doi.org/10.1001/jamaophthalmol.2019.3501
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6743057
December 2019

Simultaneous Conversion of C5 and C6 Sugars into Methyl Levulinate with the Addition of 1,3,5-Trioxane.

ChemSusChem 2019 Oct 4;12(19):4400-4404. Epub 2019 Sep 4.

Key Laboratory of Biomass Chemical Engineering of Ministry of Education, College of Chemical and Biological Engineering, Zhejiang University, Hangzhou, 310027, China.

The simultaneous conversion of mixed C5 and C6 sugars into methyl levulinate (MLE) has emerged as a versatile strategy to eliminate costly separation steps. However, the traditional upgrading of C5 sugars into MLE is complex, as it requires both acid-catalyzed and hydrogenation processes. This study develops a one-pot, hydrogenation-free conversion of C5 sugars into MLE over different acid catalysts under near-critical methanol conditions with the help of 1,3,5-trioxane. For the conversion of C5 sugars over zeolites without the addition of 1,3,5-trioxane, the MLE yield is quite low, owing to low hydrogenation activity. The addition of 1,3,5-trioxane significantly boosts the MLE yield by providing an alternative conversion pathway that bypasses the hydrogenation step. A direct comparison of the catalytic performance of five different zeolites reveals that Hβ zeolite, which has high densities of both Lewis and Brønsted acid sites, affords the highest MLE yield. With the addition of 1,3,5-trioxane, the hydroxymethylation of the furfural derivative with formaldehyde is a key step. Notably, the simultaneous conversion of C5 and C6 sugars catalyzed by Hβ zeolite can attain an MLE yield as high as 50.4% when the reaction conditions are fully optimized. Moreover, the Hβ zeolite catalyst can be reused at least five times without significant change in performance.
http://dx.doi.org/10.1002/cssc.201902096
October 2019

A Large-Scale Database and a CNN Model for Attention-Based Glaucoma Detection.

IEEE Trans Med Imaging 2020 02 8;39(2):413-424. Epub 2019 Jul 8.

Glaucoma is one of the leading causes of irreversible vision loss. Many approaches have recently been proposed for automatic glaucoma detection based on fundus images. However, none of the existing approaches can efficiently remove the high redundancy in fundus images, which may reduce the reliability and accuracy of glaucoma detection. To avoid this disadvantage, this paper proposes an attention-based convolutional neural network (CNN) for glaucoma detection, called AG-CNN. Specifically, we first establish a large-scale attention-based glaucoma (LAG) database, which includes 11 760 fundus images labeled as either positive glaucoma (4878) or negative glaucoma (6882). Among these, the attention maps of 5824 images are further obtained from ophthalmologists through a simulated eye-tracking experiment. Then, a new AG-CNN structure is designed, including an attention prediction subnet, a pathological area localization subnet, and a glaucoma classification subnet. The attention maps are predicted in the attention prediction subnet to highlight the salient regions for glaucoma detection, in a weakly supervised training manner. In contrast to other attention-based CNN methods, the features are also visualized as the localized pathological area, which is further incorporated into the AG-CNN structure to enhance glaucoma detection performance. Finally, experimental results on our LAG database and another public glaucoma database show that the proposed AG-CNN approach significantly advances the state of the art in glaucoma detection.
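The core idea of attention-based suppression of redundant regions can be illustrated by masking CNN feature maps with a predicted attention map. This is only a sketch of that weighting step, not the AG-CNN implementation; the function name and normalization choice are assumptions:

```python
import numpy as np

def apply_attention(features, attention):
    """Weight feature maps by an attention map so that low-attention
    (redundant) regions are suppressed before classification.

    features:  array of shape (C, H, W)
    attention: array of shape (H, W), non-negative
    """
    att = attention / max(attention.max(), 1e-8)  # scale peak attention to 1
    return features * att[None, :, :]             # broadcast over channels
```

Regions with zero attention are zeroed out entirely, while the most salient regions pass through unchanged.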
http://dx.doi.org/10.1109/TMI.2019.2927226
February 2020

A Deep Learning Approach for Multi-Frame In-Loop Filter of HEVC.

IEEE Trans Image Process 2019 Nov 14;28(11):5663-5678. Epub 2019 Jun 14.

The in-loop filter has been extensively studied in the High Efficiency Video Coding (HEVC) standard to reduce compression artifacts and thus improve coding efficiency. However, in existing approaches, the in-loop filter is always applied to each single frame, without exploiting the content correlation among multiple frames. In this paper, we propose a multi-frame in-loop filter (MIF) for HEVC, which enhances the visual quality of each encoded frame by leveraging its adjacent frames. Specifically, we first construct a large-scale database containing encoded frames and their corresponding raw frames for a variety of content, which can be used to learn the in-loop filter in HEVC. Furthermore, we find that for an encoded frame there usually exist a number of reference frames of higher quality and similar content. Accordingly, a reference frame selector (RFS) is designed to identify these frames. Then, a deep neural network for MIF (MIF-Net) is developed to enhance the quality of each encoded frame by utilizing the spatial information of this frame and the temporal information of its neighboring higher-quality frames. MIF-Net is built on the recently developed DenseNet, benefiting from its improved generalization capacity and computational efficiency. In addition, a novel block-adaptive convolutional layer is designed and applied in MIF-Net to handle artifacts influenced by the coding tree unit (CTU) structure in HEVC. Extensive experiments show that our MIF approach achieves on average an 11.621% saving of the Bjøntegaard delta bit-rate (BD-BR) on the standard test set, significantly outperforming the standard in-loop filter in HEVC and other state-of-the-art approaches.
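The reference-frame selection step can be sketched as picking nearby earlier frames of higher quality than the current one. This is a hypothetical simplification of the RFS (the window size, quality metric, and function names are assumptions, not from the paper):

```python
def select_reference_frames(qualities, current, window=6, max_refs=2):
    """Pick up to max_refs earlier frames, within a temporal window,
    whose quality (e.g., PSNR) exceeds that of the current frame.

    qualities: per-frame quality scores, indexed by frame number
    current:   index of the frame to be filtered
    """
    cands = [(qualities[i], i)
             for i in range(max(0, current - window), current)
             if qualities[i] > qualities[current]]
    cands.sort(reverse=True)            # prefer the highest-quality references
    return [i for _, i in cands[:max_refs]]
```

For example, with PSNRs `[40, 36, 42, 35]` and current frame 3, the selector returns frames 2 and 0, the two best higher-quality neighbors.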
http://dx.doi.org/10.1109/TIP.2019.2921877
November 2019

Double Bundle versus Single Bundle Reconstruction in the Treatment of Posterior Cruciate Ligament Injury: A Prospective Comparative Study.

Indian J Orthop 2019 Mar-Apr;53(2):297-303

Center of Orthopaedics and Sport Medicine, Qingdao Municipal Hospital, School of Medicine, Qingdao University, Qingdao, China.

Background: The debate continues regarding the best way to reconstruct posterior cruciate ligament (PCL). The objective of this study was to compare the knee stability and clinical outcomes after single and double bundle (SB and DB) PCL reconstruction.

Materials And Methods: A total of 98 patients with PCL injury were enrolled for PCL reconstruction with a four-strand semitendinosus and gracilis tendon autograft in the SB technique (n = 65) or a two-strand Achilles allograft in the DB technique (n = 33). Each bundle fixation was achieved by means of a femoral Endo Button CL and a tibial bioabsorbable interference screw. Demographic data, knee stability, and clinical outcomes were collected for analysis.

Results: The SB and DB groups showed comparable demographic data. After a minimum follow-up interval of 24 months, the data of 59 patients in the SB group and 30 patients in the DB group were analyzed. There was no statistically significant difference between the SB and DB groups in terms of either knee stability or clinical outcomes (P > 0.05).

Conclusions: Compared with the SB technique, the DB technique did not exhibit any superiority in knee stability or clinical outcomes.
http://dx.doi.org/10.4103/ortho.IJOrtho_430_17
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6415566
April 2019

Highly Selective and Reversible Sulfur Dioxide Adsorption on a Microporous Metal-Organic Framework via Polar Sites.

ACS Appl Mater Interfaces 2019 Mar 8;11(11):10680-10688. Epub 2019 Mar 8.

Key Laboratory of Poyang Lake Environment and Resource Utilization (Nanchang University) , Ministry of Education , Nanchang 330031 , Jiangxi , PR China.

It is very challenging to achieve efficient and deep desulfurization, especially in flue gases with an extremely low SO₂ concentration. Herein, we report a microporous metal-organic framework (ELM-12) with specific polar sites and proper pore size for the highly efficient SO₂ removal from flue gas and other SO₂-containing gases. A high SO₂ capacity of 61.2 cm³·g⁻¹ combined with exceptionally outstanding selectivity of SO₂/CO₂ (30), SO₂/CH₄ (871), and SO₂/N₂ (4064) under ambient conditions (i.e., 10:90 mixture at 298 K and 1 bar) was achieved. Notably, the SO₂/N₂ selectivity is unprecedented among the values ever reported for porous materials. Moreover, dispersion-corrected density functional theory calculations illustrated that the superior SO₂ capture ability and selectivity arise from the high-density SO₂ binding sites of the CF₃SO₃ group in the pore cavity (S···O interactions) and the aromatic linkers in the pore walls (H···O interactions). Dynamic breakthrough experiments confirm the regeneration stability and excellent separation performance. Furthermore, ELM-12 is also stable after exposure to SO₂, water vapor, and organic solvents.
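Adsorption selectivity for a binary mixture is conventionally defined as the ratio of adsorbed-phase to gas-phase composition. A minimal sketch of that calculation (the numbers below are illustrative, chosen only to show the order of magnitude of a ~4000-fold selectivity, and are not the paper's data):

```python
def adsorption_selectivity(q1, q2, y1, y2):
    """Selectivity S = (q1/q2) / (y1/y2).

    q1, q2: equilibrium uptakes of components 1 and 2 (same units)
    y1, y2: gas-phase mole fractions of components 1 and 2
    """
    return (q1 / q2) / (y1 / y2)
```

For a 10:90 mixture (y1 = 0.1, y2 = 0.9) where component 1 is taken up 400 times more strongly (q1 = 40, q2 = 0.1), the selectivity is 3600.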
http://dx.doi.org/10.1021/acsami.9b01423
March 2019

Cell-type-specific eQTL of primary melanocytes facilitates identification of melanoma susceptibility genes.

Genome Res 2018 11 17;28(11):1621-1635. Epub 2018 Oct 17.

Laboratory of Translational Genomics, Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892, USA.

Most expression quantitative trait locus (eQTL) studies to date have been performed in heterogeneous tissues as opposed to specific cell types. To better understand the cell-type-specific regulatory landscape of human melanocytes, which give rise to melanoma but account for <5% of typical human skin biopsies, we performed an eQTL analysis in primary melanocyte cultures from 106 newborn males. We identified 597,335 cis-eQTL SNPs prior to linkage disequilibrium (LD) pruning and 4997 eGenes (FDR < 0.05). Melanocyte eQTLs differed considerably from those identified in the 44 GTEx tissue types, including skin. Over a third of melanocyte eGenes, including key genes in melanin synthesis pathways, were unique to melanocytes compared to those of GTEx skin tissues or TCGA melanomas. The melanocyte data set also identified trans-eQTLs, including those connecting a pigmentation-associated functional SNP with four genes, likely through cis-regulation of IRF4. Melanocyte eQTLs are enriched in cis-regulatory signatures found in melanocytes as well as in melanoma-associated variants identified through genome-wide association studies. Melanocyte eQTLs also colocalized with melanoma GWAS variants in five known loci. Finally, a transcriptome-wide association study using melanocyte eQTLs uncovered four novel susceptibility loci, where the imputed expression levels of five genes were associated with melanoma at genome-wide significant P-values. Our data highlight the utility of lineage-specific eQTL resources for annotating GWAS findings and present a robust database for genomic research of melanoma risk and melanocyte biology.
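A cis-eQTL test at its simplest is a regression of expression on allele dosage, with FDR control across many tests. A minimal sketch, assuming additive coding (0/1/2) and the standard Benjamini-Hochberg procedure; this illustrates the kind of analysis described, not the study's actual pipeline:

```python
def eqtl_beta(genotypes, expression):
    """Least-squares slope of expression ~ allele dosage (0/1/2)."""
    n = len(genotypes)
    gm = sum(genotypes) / n
    em = sum(expression) / n
    num = sum((g - gm) * (e - em) for g, e in zip(genotypes, expression))
    den = sum((g - gm) ** 2 for g in genotypes)
    return num / den

def benjamini_hochberg(pvals, alpha=0.05):
    """Indices of hypotheses passing Benjamini-Hochberg FDR control."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m, k = len(pvals), 0
    for rank, i in enumerate(order, 1):
        if pvals[i] <= alpha * rank / m:
            k = rank                  # largest rank meeting the BH threshold
    return {order[i] for i in range(k)}
```

An eGene would then be a gene whose best cis-SNP survives the FDR cutoff.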
http://dx.doi.org/10.1101/gr.233304.117
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6211648
November 2018

Concurrent HER or PI3K Inhibition Potentiates the Antitumor Effect of the ERK Inhibitor Ulixertinib in Preclinical Pancreatic Cancer Models.

Mol Cancer Ther 2018 10 31;17(10):2144-2155. Epub 2018 Jul 31.

Division of Oncology, Department of Internal Medicine, Washington University School of Medicine, Saint Louis, Missouri.

Effective treatment for pancreatic ductal adenocarcinoma (PDAC) is an urgent, unmet medical need. Targeting KRAS, the oncogene that is mutated in >95% of PDAC, is a heavily pursued strategy but remains unsuccessful in the clinic. Therefore, targeting key effector cascades of the KRAS oncoprotein, particularly the mitogenic RAF-MEK-ERK pathway, represents the next best strategy. However, RAF or MEK inhibitors have failed to show clinical efficacy in PDAC. Several studies have shown that cancer cells treated with RAF or MEK inhibitors adopt multiple mechanisms to reactivate ERK signaling. Therefore, the development of ERK-specific inhibitors carries the promise of effectively abrogating this pathway. Ulixertinib (BVD-523) is a first-in-class ERK-specific inhibitor that has demonstrated promising antitumor activity in a phase I clinical trial for advanced solid tumors with BRAF and NRAS mutations, providing a strong rationale to test this inhibitor in PDAC. In this study, we show that ulixertinib effectively inhibits the growth of multiple PDAC lines and potentiates the cytotoxic effect of gemcitabine. Moreover, we found that PDAC cells treated with ulixertinib upregulate the parallel PI3K-AKT pathway by activating the HER/ErbB family proteins. Concurrent inhibition of PI3K or HER proteins synergizes with ulixertinib in suppressing PDAC cell growth in vitro and in vivo. Overall, our study provides the preclinical rationale for testing combinations of ulixertinib with chemotherapy or with PI3K and HER inhibitors in PDAC patients.
http://dx.doi.org/10.1158/1535-7163.MCT-17-1142
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6168412
October 2018

Predicting Head Movement in Panoramic Video: A Deep Reinforcement Learning Approach.

IEEE Trans Pattern Anal Mach Intell 2019 11 24;41(11):2693-2708. Epub 2018 Jul 24.

Panoramic video provides an immersive and interactive experience by enabling humans to control the field of view (FoV) through head movement (HM). Thus, HM plays a key role in modeling human attention on panoramic video. This paper establishes a database collecting subjects' HM in panoramic video sequences. From this database, we find that the HM data are highly consistent across subjects. Furthermore, we find that deep reinforcement learning (DRL) can be applied to predict HM positions, by maximizing the reward of imitating human HM scanpaths through the agent's actions. Based on our findings, we propose a DRL-based HM prediction (DHP) approach with offline and online versions, called offline-DHP and online-DHP. In offline-DHP, multiple DRL workflows are run to determine potential HM positions at each panoramic frame. Then, a heat map of the potential HM positions, named the HM map, is generated as the output of offline-DHP. In online-DHP, the next HM position of a subject is estimated given the currently observed HM position, which is achieved by developing a DRL algorithm upon the learned offline-DHP model. Finally, the experiments validate that our approach is effective in both offline and online prediction of HM positions for panoramic video, and that the learned offline-DHP model can improve the performance of online-DHP.
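Turning a set of predicted HM positions into a heat map, as in the offline-DHP output, amounts to accumulating a small Gaussian at each position and normalizing. A sketch under assumed parameters (map resolution and sigma are arbitrary here, and the function name is hypothetical):

```python
import numpy as np

def hm_heat_map(positions, shape=(32, 64), sigma=2.0):
    """Accumulate predicted head-movement positions into a normalized heat map.

    positions: iterable of (row, col) HM positions on the map grid
    shape:     (height, width) of the heat map
    sigma:     spread of the Gaussian placed at each position
    """
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    heat = np.zeros(shape)
    for (y, x) in positions:
        heat += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
    return heat / heat.max()           # peak normalized to 1
```

With a single predicted position, the map peaks exactly at that position.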
http://dx.doi.org/10.1109/TPAMI.2018.2858783
November 2019

Reducing Complexity of HEVC: A Deep Learning Approach.

IEEE Trans Image Process 2018 Jun 13. Epub 2018 Jun 13.

High Efficiency Video Coding (HEVC) significantly reduces bit-rates over the preceding H.264 standard, but at the expense of extremely high encoding complexity. In HEVC, the quad-tree partition of coding units (CUs) consumes a large proportion of the encoding complexity, due to the brute-force search for rate-distortion optimization (RDO). Therefore, this paper proposes a deep learning approach to predict the CU partition for reducing the HEVC complexity at both intra- and inter-modes, based on a convolutional neural network (CNN) and a long- and short-term memory (LSTM) network. First, we establish a large-scale database including substantial CU partition data for HEVC intra- and inter-modes. This enables deep learning on the CU partition. Second, we represent the CU partition of an entire coding tree unit (CTU) in the form of a hierarchical CU partition map (HCPM). Then, we propose an early-terminated hierarchical CNN (ETH-CNN) for learning to predict the HCPM. Consequently, the encoding complexity of intra-mode HEVC can be drastically reduced by replacing the brute-force search with ETH-CNN to decide the CU partition. Third, an early-terminated hierarchical LSTM (ETH-LSTM) is proposed to learn the temporal correlation of the CU partition. Then, we combine ETH-LSTM and ETH-CNN to predict the CU partition for reducing the HEVC complexity at inter-mode. Finally, experimental results show that our approach outperforms other state-of-the-art approaches in reducing the HEVC complexity at both intra- and inter-modes.
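The early-terminated quad-tree decision can be sketched as a recursion that stops descending as soon as the predicted split probability falls below a per-depth threshold, replacing the brute-force RDO search. This is an illustrative simplification of the idea (the probability callback and thresholds are assumptions, not ETH-CNN itself):

```python
def predict_partition(prob, depth=0, max_depth=3, thresholds=(0.5, 0.5, 0.5)):
    """Recursively decide CU splits from predicted split probabilities.

    prob(depth) -> predicted probability that the CU at this depth splits.
    Returns 0 for a leaf CU, or a list of four sub-decisions (quad-tree),
    mirroring a hierarchical CU partition map (HCPM).
    """
    if depth == max_depth or prob(depth) < thresholds[depth]:
        return 0                      # early termination: no further split
    return [predict_partition(prob, depth + 1, max_depth, thresholds)
            for _ in range(4)]        # quad-tree split into 4 sub-CUs
```

A CTU whose split probability is high only at depth 0 yields one level of splitting: `[0, 0, 0, 0]`.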
http://dx.doi.org/10.1109/TIP.2018.2847035
June 2018

Find who to look at: Turning from action to saliency.

IEEE Trans Image Process 2018 09 16;27(9):4529-4544. Epub 2018 May 16.

The past decade has witnessed the use of high-level features in saliency prediction for both videos and images. Unfortunately, existing saliency prediction methods only handle high-level static features, such as faces. In fact, high-level dynamic features (also called actions), such as speaking or head turning, are also extremely attractive to visual attention in videos. Thus, in this paper, we propose a data-driven method for learning to predict the saliency of multiple-face videos, by leveraging both static and dynamic features at the high level. Specifically, we introduce an eye-tracking database collecting the fixations of 39 subjects viewing 65 multiple-face videos. Through analysis of our database, we find a set of high-level features that cause a face to receive extensive visual attention. These include the static features of face size, center bias, and head pose, as well as the dynamic features of speaking and head turning. Then, we present techniques for extracting these high-level features. Afterwards, a novel model, namely the multiple hidden Markov model (M-HMM), is developed to enable the transition of saliency among faces. In our M-HMM, the saliency transition takes into account both the state of saliency at previous frames and the observed high-level features at the current frame. The experimental results show that the proposed method is superior to other state-of-the-art methods in predicting visual attention on multiple-face videos. Finally, we shed light on a promising implementation of our saliency prediction method in locating the region-of-interest (ROI) for video conference compression with High Efficiency Video Coding (HEVC).
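The saliency-transition idea, combining the previous saliency state with current per-face observations, can be sketched as an HMM-style filtering update. This is a hypothetical simplification of the M-HMM, with an assumed blending weight and names:

```python
def update_saliency(prev, transition, observation, w=0.6):
    """One saliency-transition step across n faces.

    prev:        previous per-face saliency distribution (length n)
    transition:  n x n matrix; transition[j][i] = P(saliency moves j -> i)
    observation: per-face likelihoods from current high-level features
    w:           weight on the propagated prior vs. the observation
    """
    n = len(prev)
    propagated = [sum(prev[j] * transition[j][i] for j in range(n))
                  for i in range(n)]
    post = [w * propagated[i] + (1 - w) * observation[i] for i in range(n)]
    s = sum(post)
    return [p / s for p in post]       # renormalize to a distribution
```

Starting from all saliency on face 0, a sticky transition matrix keeps most of the mass there while the observation pulls it toward other faces.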
http://dx.doi.org/10.1109/TIP.2018.2837106
September 2018

Loci associated with skin pigmentation identified in African populations.

Science 2017 11 12;358(6365). Epub 2017 Oct 12.

Translational and Functional Genomics Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD 20892, USA.

Despite the wide range of skin pigmentation in humans, little is known about its genetic basis in global populations. Examining ethnically diverse African genomes, we identify variants in or near SLC24A5, MFSD12, DDB1, TMEM138, OCA2, and HERC2 that are significantly associated with skin pigmentation. Genetic evidence indicates that the light pigmentation variant at SLC24A5 was introduced into East Africa by gene flow from non-Africans. At all other loci, variants associated with dark pigmentation in Africans are identical by descent in South Asian and Australo-Melanesian populations. Functional analyses indicate that MFSD12 encodes a lysosomal protein that affects melanogenesis in zebrafish and mice, and that mutations in melanocyte-specific regulatory regions near DDB1/TMEM138 correlate with expression of ultraviolet response genes under selection in Eurasians.
http://dx.doi.org/10.1126/science.aan8433
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5759959
November 2017

A common intronic variant of PARP1 confers melanoma risk and mediates melanocyte growth via regulation of MITF.

Nat Genet 2017 Sep 31;49(9):1326-1335. Epub 2017 Jul 31.

Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, Maryland, USA.

Previous genome-wide association studies have identified a melanoma-associated locus at 1q42.1 that encompasses a ∼100-kb region spanning the PARP1 gene. Expression quantitative trait locus (eQTL) analysis in multiple cell types of the melanocytic lineage consistently demonstrated that the 1q42.1 melanoma risk allele (rs3219090[G]) is correlated with higher PARP1 levels. In silico fine-mapping and functional validation identified a common intronic indel, rs144361550 (-/GGGCCC; r² = 0.947 with rs3219090), as displaying allele-specific transcriptional activity. A proteomic screen identified RECQL as binding to rs144361550 in an allele-preferential manner. In human primary melanocytes, PARP1 promoted cell proliferation and rescued BRAF(V600E)-induced senescence phenotypes in a PARylation-independent manner. PARP1 also transformed TERT-immortalized melanocytes expressing BRAF(V600E). PARP1-mediated senescence rescue was accompanied by transcriptional activation of the melanocyte-lineage survival oncogene MITF, highlighting a new role for PARP1 in melanomagenesis.
http://dx.doi.org/10.1038/ng.3927
September 2017