Publications by authors named "Nawal El-Fishawy"

4 Publications


A novel deep autoencoder based survival analysis approach for microarray dataset.

PeerJ Comput Sci 2021;7:e492. Epub 2021 Apr 21.

Faculty of Engineering, Delta University for Science and Technology, Gamasa, Egypt.

Background: Breast cancer is one of the major causes of mortality globally. Therefore, different Machine Learning (ML) techniques have been deployed for computing survival probabilities and for diagnosis. Survival analysis methods estimate the survival probability and identify the most important factors affecting it. Most survival analysis methods are designed for clinical features (up to hundreds), so applying methods like Cox regression to RNA-seq microarray data with many features (up to thousands) is a major challenge.

Methods: In this paper, a novel approach that applies an autoencoder to reduce the number of features is proposed. The approach reconstructs the features and removes noise within the data, as well as features with zero variance across the samples, which facilitates extraction of the features with the highest variance (across the samples) that most influence the survival probabilities. It then estimates the survival probability for each patient by applying random survival forests and Cox regression. Because applying the autoencoder to thousands of features takes a long time, the model is run on a Graphics Processing Unit (GPU) to speed up the process. The model is then evaluated and compared with existing models on three different datasets in terms of run time, concordance index, and calibration curve, and the genes most related to survival are identified. Finally, the biological pathways and GO molecular functions of these significant genes are analyzed.

Results: We fine-tuned our autoencoder model on RNA-seq data from three datasets to train the weights in our survival prediction model, then used different samples in each dataset for testing. The results show that the proposed AutoCox and AutoRandom algorithms, based on our feature-selection autoencoder approach, achieve better concordance index results than the most recent deep learning approaches when applied to each dataset. A weight is computed for each gene from our autoencoder model; the weights indicate the degree to which each gene affects the survival probability. For instance, four of the most survival-related, experimentally validated genes are at the top of our discovered gene-weight list, including PTPRG, MYST1, BG683264, and AK094562 for the breast cancer gene expression dataset. Our approach improves survival analysis by speeding up the process, enhancing prediction accuracy, and reducing the error rate in the estimated survival probability.
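The variance-based filtering step described in the Methods section can be sketched in a few lines of NumPy. The function name and toy matrix below are illustrative only; the full approach additionally trains an autoencoder to reconstruct and denoise the features before this selection:

```python
import numpy as np

def select_high_variance_features(X, k):
    """Drop zero-variance features, then keep the k features with the
    highest variance across samples. X is an (n_samples, n_features)
    expression matrix; returns the reduced matrix and the original
    column indices of the selected features."""
    keep = X.var(axis=0) > 0                  # remove zero-variance genes
    X_kept = X[:, keep]
    order = np.argsort(X_kept.var(axis=0))[::-1]  # highest variance first
    selected = order[:k]
    return X_kept[:, selected], np.flatnonzero(keep)[selected]

# Toy matrix: column 1 has zero variance, column 2 the highest.
X = np.array([[1.0, 5.0, 2.0, 3.0],
              [2.0, 5.0, 8.0, 3.5],
              [3.0, 5.0, 4.0, 2.5]])
X_sel, idx = select_high_variance_features(X, 2)
```

Here `idx` recovers the original column positions, so selected features can still be mapped back to gene identifiers after filtering.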
Source
http://dx.doi.org/10.7717/peerj-cs.492 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8080419 (PMC)
April 2021

An efficient multi-factor authentication scheme based CNNs for securing ATMs over cognitive-IoT.

PeerJ Comput Sci 2021;7:e381. Epub 2021 Mar 2.

Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menouf, Menoufia, Egypt.

Nowadays, verifying the identity of banks' clients at Automatic Teller Machines (ATMs) is a very critical task: clients' money, data, and crucial information need to be highly protected. The classical ATM verification method, a combination of credit card and password, has many drawbacks, including burglary, robbery, card expiration, and sudden loss. Recently, iris-based security has played a vital role in the success of Cognitive Internet of Things (C-IoT)-based security frameworks. The iris biometric eliminates many security issues, especially in smart IoT-based applications, principally ATMs. However, integrating an efficient iris recognition system into critical IoT environments like ATMs involves many complex scenarios. To address these issues, this article proposes a novel, efficient full authentication system for ATMs based on a bank's mobile application and visible-light iris recognition. It uses a deep Convolutional Neural Network (CNN) as a feature extractor and a fully connected neural network (FCNN) with a Softmax layer as a classifier. Chaotic encryption is also used to secure iris template transmission over the internet. The effects of several kinds of noisy iris images are studied and evaluated, covering noise interference from sensing IoT devices, poor acquisition of iris images by ATMs, and other system attacks. Experimental results show highly competitive and satisfying recognition accuracy and training time. The model shows little degradation of recognition accuracy when noisy iris images are used. Moreover, the proposed methodology has a relatively low training time, which is useful in many critical IoT-based applications, especially ATMs in banking systems.
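The abstract does not specify which chaotic map secures the iris-template transmission; a common minimal choice in chaotic image/template encryption is a logistic-map XOR keystream, sketched below. The parameter values (`x0`, `r`) and function names are illustrative assumptions, not the authors' actual scheme:

```python
def logistic_keystream(x0, r, n):
    """Generate n keystream bytes by iterating the logistic map
    x -> r*x*(1-x), which behaves chaotically for r near 4."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 255) & 0xFF)
    return bytes(out)

def chaotic_xor(data, x0=0.6, r=3.99):
    """XOR data with the chaotic keystream. Because XOR is its own
    inverse, applying the same function twice decrypts."""
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

template = b"iris-feature-vector"
cipher = chaotic_xor(template)
assert chaotic_xor(cipher) == template  # symmetric round-trip
```

The sensitivity of the logistic map to its initial condition `x0` is what makes the keystream hard to reproduce without the shared key `(x0, r)`.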
Source
http://dx.doi.org/10.7717/peerj-cs.381 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7959630 (PMC)
March 2021

Coronavirus disease 2019 (COVID-19): survival analysis using deep learning and Cox regression model.

Pattern Anal Appl 2021 Feb 15:1-13. Epub 2021 Feb 15.

Faculty of Engineering, Delta University for Science and Technology, Gamasa, Egypt.

Coronavirus disease (COVID-19) is one of the most serious problems the world has faced, bringing daily life to a standstill all over the world. It has spread so widely that hospital places are not available for all patients; therefore, most hospitals accept patients whose chances of recovery are high. Machine learning and artificial intelligence techniques have been deployed for computing infection risks and performing survival analysis and classification. Survival analysis (time-to-event analysis) is widely used in many areas such as engineering and medicine. This paper presents two systems, Cox_COVID_19 and Deep_Cox_COVID_19, that are based on Cox regression to study survival analysis for COVID-19, help hospitals choose patients with better chances of survival, and identify the most important symptoms (features) affecting survival probability. Cox_COVID_19 is based on Cox regression alone, while Deep_Cox_COVID_19 combines an autoencoder deep neural network with Cox regression to enhance prediction accuracy. A clinical dataset of 1,085 COVID-19 patients is used. The results show that applying an autoencoder to reconstruct the features before applying the Cox regression algorithm improves the results by increasing concordance, accuracy, and precision. The Deep_Cox_COVID_19 system achieves a concordance of 0.983 for training and 0.999 for testing, whereas the Cox_COVID_19 system achieves 0.923 for training and 0.896 for testing. The most important features affecting mortality are age, muscle pain, pneumonia, and throat pain. Both systems can predict the survival probability and identify significant symptoms (features) that differentiate severe cases from death cases, but Deep_Cox_COVID_19 outperforms Cox_COVID_19 in accuracy. Both systems can provide doctors with definite information about detection and the intervention to be taken, which can reduce mortality.
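The concordance values reported above (e.g., 0.983 for training) follow Harrell's pairwise definition: among all comparable patient pairs, the fraction in which the patient who died earlier was assigned the higher predicted risk. A minimal O(n²) sketch, with illustrative data:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index. A pair (i, j) is comparable when patient i's
    event occurred (events[i] == 1) before patient j's observed time.
    A concordant pair has risks[i] > risks[j]; ties count as 0.5."""
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                den += 1
                if risks[i] > risks[j]:
                    num += 1
                elif risks[i] == risks[j]:
                    num += 0.5
    return num / den if den else 0.5  # 0.5 = no information

# Perfectly ranked toy cohort: shorter survival -> higher risk.
times = [2, 4, 6, 8]
events = [1, 1, 0, 1]   # patient 3 is censored
risks = [0.9, 0.7, 0.4, 0.2]
```

A value of 1.0 means perfect risk ranking, 0.5 means random ranking, so the gap between Deep_Cox_COVID_19 (0.999 test) and Cox_COVID_19 (0.896 test) is substantial.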
Source
http://dx.doi.org/10.1007/s10044-021-00958-0 (DOI)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7883884 (PMC)
February 2021

Intelligence Is beyond Learning: A Context-Aware Artificial Intelligent System for Video Understanding.

Comput Intell Neurosci 2020;2020:8813089. Epub 2020 Dec 23.

Computer Science and Engineering Department, Faculty of Electronic Engineering, Menoufia University, Shibin El Kom, Menofia Governorate, Egypt.

Understanding video files is a challenging task. While current video understanding techniques rely on deep learning, the results they obtain lack real, trustworthy meaning. Deep learning recognizes patterns from big data, leading to deep feature abstraction, not deep understanding. Deep learning tries to understand a multimedia production by analyzing its content, but we cannot understand the semantics of a multimedia file by analyzing its content alone: events occurring in a scene earn their meanings from the context containing them. A screaming kid could be scared of a threat, surprised by a lovely gift, or just playing in the backyard. Artificial intelligence is a heterogeneous process that goes beyond learning. In this article, we discuss the heterogeneity of AI as a process that includes innate knowledge, approximations, and context awareness. We present a context-aware video understanding technique that makes the machine intelligent enough to understand the message behind the video stream. The main purpose is to understand the video stream by extracting real, meaningful concepts, emotions, temporal data, and spatial data from the video context. The diffusion of heterogeneous data patterns from the video context leads to accurate decision-making about the video message and outperforms systems that rely on deep learning alone. Objective and subjective comparisons demonstrate the accuracy of the concepts extracted by the proposed context-aware technique relative to current deep-learning video understanding techniques. Both systems are compared in terms of retrieval time, computing time, data-size consumption, and complexity; the comparisons show significantly more efficient resource usage by the proposed context-aware system, making it a suitable solution for real-time scenarios. Moreover, we discuss the pros and cons of deep learning architectures.
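The screaming-kid example can be caricatured as a lookup that disambiguates a detected event by the context surrounding it. This toy sketch is purely illustrative of the idea that the same content earns different meanings from different contexts; it is not the authors' actual technique, and the event/context labels are invented:

```python
def interpret(event, context):
    """Map a (detected event, scene context) pair to an interpretation.
    The same event resolves to different meanings in different contexts;
    unseen pairs fall back to 'unknown'."""
    rules = {
        ("scream", "threat"):   "scared",
        ("scream", "birthday"): "surprised by a gift",
        ("scream", "backyard"): "playing",
    }
    return rules.get((event, context), "unknown")

# The same content-level detection, three context-level meanings:
meanings = [interpret("scream", c) for c in ("threat", "birthday", "backyard")]
```

A content-only system sees only `"scream"` and must pick one label; the context-aware view treats the pair as the unit of meaning.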
Source
http://dx.doi.org/10.1155/2020/8813089DOI Listing
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7775170PMC
December 2020