Publications by authors named "Tanzila Saba"

55 Publications

Skin cancer detection from dermoscopic images using deep learning and fuzzy k-means clustering.

Microsc Res Tech 2021 Aug 27. Epub 2021 Aug 27.

Artificial Intelligence & Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia.

Melanoma skin cancer is the most life-threatening and fatal disease among the family of skin cancer diseases. Modern technological developments and research methodologies have made it possible to detect and identify this kind of skin cancer more effectively; however, the automated localization and segmentation of skin lesions at earlier stages is still a challenging task due to the low contrast between melanoma moles and the surrounding skin and the high level of color similarity between melanoma-affected and non-affected areas. In this paper, we present a fully automated method for segmenting skin melanoma at its earliest stage by employing a deep-learning-based approach, namely a faster region-based convolutional neural network (RCNN) along with fuzzy k-means clustering (FKM). Several clinical images are utilized to test the presented method so that it may help dermatologists diagnose this life-threatening disease at its earliest stage. The presented method first preprocesses the dataset images to remove noise and illumination problems and enhance the visual information before applying the faster-RCNN to obtain a feature vector of fixed length. After that, FKM is employed to segment the melanoma-affected portion of skin with variable size and boundaries. The performance of the presented method is evaluated on three standard datasets, namely ISBI-2016, ISIC-2017, and PH2, and the results show that the presented method outperforms the state-of-the-art approaches. The presented method attains an average accuracy of 95.40, 93.1, and 95.6% on the ISIC-2016, ISIC-2017, and PH2 datasets, respectively, which demonstrates its robustness in skin lesion recognition and segmentation.
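The clustering stage of this pipeline groups lesion pixels with fuzzy k-means (fuzzy c-means). The sketch below is a minimal NumPy implementation of fuzzy c-means applied to the grey levels of a pre-processed lesion crop; the two-cluster setup, fuzziness value, and the file name are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Basic fuzzy c-means; X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, X.shape[0]))
    u /= u.sum(axis=0, keepdims=True)                     # memberships sum to 1 per sample
    for _ in range(max_iter):
        um = u ** m
        centers = (um @ X) / um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        u_new /= u_new.sum(axis=0, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Illustrative use: split a pre-processed grey-level lesion crop into lesion/background.
# "lesion_crop.npy" is a hypothetical array saved by an earlier preprocessing step.
img = np.load("lesion_crop.npy").astype(float)
pixels = img.reshape(-1, 1)
centers, u = fuzzy_c_means(pixels, n_clusters=2)
labels = u.argmax(axis=0).reshape(img.shape)
lesion_mask = labels == centers.ravel().argmin()          # darker cluster taken as the lesion
```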
http://dx.doi.org/10.1002/jemt.23908
August 2021

Microscopic segmentation and classification of COVID-19 infection with ensemble convolutional neural network.

Microsc Res Tech 2021 Aug 26. Epub 2021 Aug 26.

Department of Computer Science, University of Wah, Wah Cantt, Pakistan.

According to the World Health Organization, the detection of RNA from sputum has a comparatively poor positive rate in the initial/early stages of discovering COVID-19. The infection has a different morphological structure compared with healthy tissue, as manifested on computed tomography (CT). COVID-19 diagnosis at an early stage can aid in the timely cure of patients, lowering the mortality rate. In this research, a three-phase model is proposed for COVID-19 detection. In Phase I, noise is removed from CT images using a denoising convolutional neural network (DnCNN). In Phase II, the actual lesion region is segmented from the enhanced CT images using DeepLabv3 and ResNet-18. In Phase III, the segmented images are passed to a stacked sparse autoencoder (SSAE) deep learning model having two stacked autoencoders (SAE) with selected hidden layers. The designed SSAE model is based on both SAE and softmax layers for COVID-19 classification. The proposed method is evaluated on actual patient data from Pakistan Ordnance Factories and other public benchmark data sets acquired with different scanners/mediums. The proposed method achieved a global segmentation accuracy of 0.96 and a classification accuracy of 0.97.
http://dx.doi.org/10.1002/jemt.23913
August 2021

Automatic detection of papilledema through fundus retinal images using deep learning.

Microsc Res Tech 2021 Jul 8. Epub 2021 Jul 8.

MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia.

Papilledema is a syndrome of the retina in which the optic nerve head is swollen due to elevated intracranial pressure. Papilledema abnormalities such as retinal nerve fiber layer (RNFL) opacification may lead to blindness. These abnormalities can be observed in retinal images captured with a fundus camera. This paper presents a deep learning-based automated system that detects and grades papilledema through U-Net and Dense-Net architectures. The proposed approach has two main stages. First, the optic disc and its surrounding area in the fundus retinal image are localized and cropped as input to Dense-Net, which classifies the optic disc as papilledema or normal. Second, the fundus images classified as papilledema by Dense-Net are preprocessed with a Gabor filter. The preprocessed papilledema image is input to U-Net to obtain the segmented vascular network, from which the vessel discontinuity index (VDI) and the vessel discontinuity index to disc proximity (VDIP) are calculated for grading of papilledema. VDI and VDIP are standard parameters to check the severity and grade of papilledema. The proposed system is evaluated on 60 papilledema and 40 normal fundus images taken from the STARE dataset. The experimental results for classification of papilledema through Dense-Net are strong, with sensitivity of 98.63%, specificity of 97.83%, and accuracy of 99.17%. Similarly, the grading results for mild and severe papilledema classification through U-Net are also strong, with sensitivity of 99.82%, specificity of 98.65%, and accuracy of 99.89%. To the best of the authors' knowledge, this is the first deep learning-based effort at automated detection and grading of papilledema for clinical purposes.
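The second stage preprocesses the fundus image with a Gabor filter before vessel segmentation. A minimal sketch of that kind of oriented-filter enhancement with scikit-image is shown below; the input file name, filter frequency, and orientation grid are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from skimage import io, color, filters

# Hypothetical fundus crop around the optic disc.
fundus = color.rgb2gray(io.imread("fundus_crop.png"))

responses = []
for theta in np.linspace(0, np.pi, 8, endpoint=False):       # 8 filter orientations
    real, _ = filters.gabor(fundus, frequency=0.15, theta=theta)
    responses.append(real)

# Keep the strongest oriented response per pixel to emphasise elongated vessels,
# then rescale to [0, 1] before feeding the image to the segmentation network.
enhanced = np.max(np.stack(responses), axis=0)
enhanced = (enhanced - enhanced.min()) / (enhanced.max() - enhanced.min() + 1e-8)
```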
http://dx.doi.org/10.1002/jemt.23865
July 2021

An intelligence design for detection and classification of COVID19 using fusion of classical and convolutional neural network and improved microscopic features selection approach.

Microsc Res Tech 2021 Oct 8;84(10):2254-2267. Epub 2021 May 8.

College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia.

COVID-19 is caused by infection of the respiratory system with an RNA virus that can infect both animal and human species. In its severe stage, it causes pneumonia in human beings. In this research, hand-crafted and deep microscopic features are used to classify lung infection. The proposed work consists of two phases. In Phase I, the infected lung region is segmented using the proposed U-Net deep learning model; hand-crafted features such as the histogram of oriented gradients (HOG), noise-to-harmonic ratio (NHr), and segmentation-based fractal texture analysis (SFTA) are extracted from the segmented image, and optimum features are selected from each feature vector using entropy. In Phase II, local binary patterns (LBP) and speeded-up robust features (SURF) are extracted from the input CT images, deep learning features are extracted using pretrained networks such as InceptionV3 and ResNet101, and optimum features are selected based on entropy. Finally, the selected optimum features are fused in two ways: (i) the hand-crafted features (HOG, NHr, SFTA, LBP, SURF) are horizontally concatenated/fused, and (ii) the hand-crafted features are combined/fused with the deep features. The fused optimum feature vector is passed to ensemble models (boosted tree, bagged tree, and RUSBoosted tree) in two ways for COVID-19 classification: (i) classification using the fused hand-crafted features, and (ii) classification using the fusion of hand-crafted and deep features. The proposed methodology is tested/evaluated on three benchmark datasets. Results on the two datasets employed for experiments show that the fusion of hand-crafted and deep microscopic features provides better results than the fused hand-crafted features alone.
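The abstract does not spell out the entropy criterion or the per-descriptor dimensions, so the sketch below is only one plausible reading: each feature column is ranked by its Shannon entropy, the top-k columns of every descriptor are kept, and the survivors are horizontally (serially) concatenated. All matrix sizes and the file-free random data are made up for illustration.

```python
import numpy as np

def entropy_scores(F, bins=32):
    """Shannon entropy of each feature column (used here as the ranking score)."""
    scores = []
    for col in F.T:
        hist, _ = np.histogram(col, bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        scores.append(-(p * np.log2(p)).sum())
    return np.array(scores)

def select_top(F, k):
    idx = np.argsort(entropy_scores(F))[::-1][:k]
    return F[:, idx]

# Hypothetical feature matrices for the same N images (shapes are illustrative).
rng = np.random.default_rng(0)
N = 200
hog, nhr, sfta = rng.random((N, 3780)), rng.random((N, 12)), rng.random((N, 48))
deep = rng.random((N, 2048))                                  # e.g. pooled CNN activations

handcrafted = np.hstack([select_top(hog, 500), select_top(nhr, 8), select_top(sfta, 32)])
fused = np.hstack([handcrafted, select_top(deep, 512)])       # serial (horizontal) fusion
print(handcrafted.shape, fused.shape)
```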
http://dx.doi.org/10.1002/jemt.23779
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8237066
October 2021

Viral reverse engineering using Artificial Intelligence and big data COVID-19 infection with Long Short-term Memory (LSTM).

Environ Technol Innov 2021 May 2;22:101531. Epub 2021 Apr 2.

Department of Medicine, University of Liverpool, Liverpool, UK.

This research presents a reverse engineering approach to discover the patterns and evolution behavior of SARS-CoV-2 using AI and big data. Accordingly, we have studied five viral families that emerged over the past one hundred years, in order to capture the similarities, common characteristics, and evolution behavior relevant to prediction concerning SARS-CoV-2, and to show how reverse engineering using artificial intelligence (AI) and big data is efficient and provides wide horizons. The results show that SARS-CoV-2 shares its most active amino acids with the mentioned viral families; as is known, these affect the building function of the proteins. We have also devised a mathematical formula representing how we calculate the evolution difference percentage between each virus with respect to its phylogenetic tree. It shows that SARS-CoV-2 has fast mutation evolution with respect to its time of arising. AI is used to predict the next evolved instance of SARS-CoV-2 by utilizing the phylogenetic tree data as a corpus with Long Short-Term Memory (LSTM). This paper demonstrates the evolved viral instance prediction process on a SARS-CoV-2 protein as the first stage toward predicting the complete mutant virus. Finally, in this research, we have focused on analyzing the virus down to its primary factors by reverse engineering using AI and big data to understand the viral similarities, patterns, and evolution behavior, and to predict future viral mutations in a systematic and logical way.
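The prediction step treats sequence data as a corpus for an LSTM. Below is a minimal next-residue language-model sketch over the 20-letter amino-acid alphabet; the architecture, window size, and the randomly generated placeholder sequence are illustrative assumptions and not the authors' model or data.

```python
import numpy as np
import tensorflow as tf

AMINO = "ACDEFGHIKLMNPQRSTVWY"
to_idx = {a: i for i, a in enumerate(AMINO)}

def windows(seq, w=20):
    """Turn one protein string into (input window, next residue) training pairs."""
    X, y = [], []
    for i in range(len(seq) - w):
        X.append([to_idx[c] for c in seq[i:i + w]])
        y.append(to_idx[seq[i + w]])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
seq = "".join(rng.choice(list(AMINO), size=200))   # placeholder sequence, not real viral data
X, y = windows(seq)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(AMINO), 16),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(AMINO), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

# Predict the residue that would follow the last window.
probs = model.predict(X[-1:], verbose=0)[0]
print("next residue:", AMINO[int(probs.argmax())])
```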
http://dx.doi.org/10.1016/j.eti.2021.101531
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8016547
May 2021

Machine learning techniques to detect and forecast the daily total COVID-19 infected and deaths cases under different lockdown types.

Microsc Res Tech 2021 Jul 1;84(7):1462-1474. Epub 2021 Feb 1.

Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia.

COVID-19 has impacted the world in many ways, including loss of lives, economic downturn, and social isolation. COVID-19 emerged due to SARS-CoV-2, a highly infectious virus that caused a pandemic. Every country tried to control the spread of COVID-19 by imposing different types of lockdowns. Therefore, there is an urgent need to forecast the daily confirmed infected cases and deaths under different types of lockdown in order to select the most appropriate lockdown strategies, control the intensity of this pandemic, and reduce the burden on hospitals. Currently, three types of lockdown (partial, herd, complete) are imposed in different countries. In this study, three countries from each type of lockdown were studied by applying time-series and machine learning models, namely random forests, K-nearest neighbors, SVM, decision trees (DTs), polynomial regression, Holt-Winters, ARIMA, and SARIMA, to forecast daily confirmed infected cases and deaths due to COVID-19. The models' accuracy and effectiveness were evaluated using three error-based performance criteria. A single forecasting model could not capture the trends of all data sets due to the varying nature of the data sets and lockdown types. The three top-ranked models were used to predict the confirmed infected cases and deaths, and the outperforming models were also adopted for out-of-sample prediction, obtaining results very close to the actual values of cumulative infected cases and deaths due to COVID-19. This study proposes auspicious models for forecasting and the best lockdown strategy to mitigate the casualties of COVID-19.
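The study compares statistical time-series models with machine learning regressors on the same daily-cases series. A hedged sketch of that kind of head-to-head comparison is shown below, pitting an ARIMA fit against a lag-feature random forest on a synthetic series; the ARIMA order, lag length, and data are illustrative stand-ins for the paper's country-level datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from statsmodels.tsa.arima.model import ARIMA

# Synthetic cumulative daily-cases series standing in for one country's counts.
rng = np.random.default_rng(1)
cases = np.cumsum(rng.poisson(50, 200)).astype(float)
train, test = cases[:180], cases[180:]

# ARIMA: fit on the training span, forecast the held-out 20 days.
arima_fc = ARIMA(train, order=(2, 1, 2)).fit().forecast(steps=len(test))

# Random forest on lag features (the last 7 days predict the next day).
def lag_matrix(series, lags=7):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

Xtr, ytr = lag_matrix(train)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)

history, rf_fc = list(train), []
for _ in range(len(test)):                        # recursive multi-step forecast
    nxt = rf.predict(np.array(history[-7:]).reshape(1, -1))[0]
    rf_fc.append(nxt)
    history.append(nxt)

print("ARIMA MAE:", mean_absolute_error(test, arima_fc))
print("RF    MAE:", mean_absolute_error(test, rf_fc))
```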
http://dx.doi.org/10.1002/jemt.23702
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8014446
July 2021

Prediction of COVID-19 - Pneumonia based on Selected Deep Features and One Class Kernel Extreme Learning Machine.

Comput Electr Eng 2021 Mar 30;90:106960. Epub 2020 Dec 30.

College of Computer and Information Sciences, Prince Sultan University, SA.

In this work, we propose a deep learning framework for the classification of COVID-19 pneumonia infection from normal chest CT scans. In this regard, a 15-layered convolutional neural network architecture is developed which extracts deep features from the selected image samples collected from Radiopaedia. Deep features are collected from two different layers, the global average pooling and fully connected layers, which are later combined using the max-layer detail (MLD) approach. Subsequently, a Correntropy technique is embedded in the main design to select the most discriminant features from the pool of features. A one-class kernel extreme learning machine classifier is utilized for the final classification, achieving an average accuracy of 95.1%, and sensitivity, specificity, and precision rates of 95.1%, 95%, and 94%, respectively. To further verify our claims, detailed statistical analyses based on the standard error of the mean (SEM) are also provided, which prove the effectiveness of our proposed prediction design.
http://dx.doi.org/10.1016/j.compeleceng.2020.106960
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7832028
March 2021

Brain tumor detection and multi-classification using advanced deep learning techniques.

Microsc Res Tech 2021 Jun 5;84(6):1296-1308. Epub 2021 Jan 5.

School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China.

A brain tumor is an uncontrolled development of brain cells that can result in brain cancer if not detected at an early stage. Early brain tumor diagnosis plays a crucial role in treatment planning and patients' survival rate. Brain tumors have distinct forms, properties, and therapies. Therefore, manual brain tumor detection is complicated, time-consuming, and vulnerable to error. Hence, automated computer-assisted diagnosis at high precision is currently in demand. This article presents segmentation through a U-Net architecture with ResNet50 as a backbone on the Figshare data set, achieving an intersection over union (IoU) of 0.9504. Preprocessing and data augmentation concepts were introduced to enhance the classification rate. The multi-classification of brain tumors is performed using evolutionary algorithms and reinforcement learning through transfer learning. Other deep learning methods such as ResNet50, DenseNet201, MobileNet V2, and InceptionV3 are also applied. The results obtained show that the proposed research framework performs better than those reported in the state of the art. The CNN models applied for tumor classification, namely MobileNet V2, Inception V3, ResNet50, DenseNet201, and NASNet, attained accuracies of 91.8, 92.8, 92.9, 93.1, and 99.6%, respectively, with NASNet exhibiting the highest accuracy.
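The classification stage relies on transfer learning from ImageNet-pretrained backbones. The sketch below shows the generic pattern with a frozen MobileNetV2 and a small classification head in Keras; the class count, directory layout, and training settings are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 3          # illustrative; set to the number of tumor categories in the dataset

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
base.trainable = False                                # freeze pretrained weights, train the head only

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),   # map pixels to the [-1, 1] range MobileNetV2 expects
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: one sub-folder of MRI slices per tumor class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "figshare_mri/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```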
http://dx.doi.org/10.1002/jemt.23688
June 2021

Computer vision for microscopic skin cancer diagnosis using handcrafted and non-handcrafted features.

Authors:
Tanzila Saba

Microsc Res Tech 2021 Jun 5;84(6):1272-1283. Epub 2021 Jan 5.

Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia.

Skin covers the entire body and is the largest organ. Skin cancer is one of the most dreadful cancers and is primarily triggered by sensitivity to ultraviolet rays from the sun. The riskiest form is melanoma, although it starts in a few different ways. Patients are largely unaware of how to recognize malignant skin growth at its initial stage. The literature shows that various handcrafted and automatic deep learning features are employed to diagnose skin cancer using traditional machine learning and deep learning techniques. The current research presents a comparison of skin cancer diagnosis techniques using handcrafted and non-handcrafted features. Additionally, clinical features such as the Menzies method, seven-point detection, asymmetry, border, color and diameter, visual textures (GRC), local binary patterns, Gabor filters, Markov random fields, fractal dimension, and orientation histograms are also explored in the process of skin cancer detection. Several parameters, such as the Jaccard index, accuracy, Dice coefficient, precision, sensitivity, and specificity, are compared on benchmark data sets to assess the reported techniques. Finally, publicly available skin cancer data sets are described and the remaining issues are highlighted.
http://dx.doi.org/10.1002/jemt.23686
June 2021

Melanoma Detection and Classification using Computerized Analysis of Dermoscopic Systems: A Review.

Curr Med Imaging 2020 ;16(7):794-822

Department of Computer Science and Engineering, HITEC University, Museum Road, Taxila, Pakistan.

Malignant melanoma is considered as one of the most deadly cancers, which has broadly increased worldwide since the last decade. In 2018, around 91,270 cases of melanoma were reported and 9,320 people died in the US. However, diagnosis at the initial stage indicates a high survival rate. The conventional diagnostic methods are expensive, inconvenient and subject to the dermatologist's expertise as well as a highly equipped environment. Recent achievements in computerized based systems are highly promising with improved accuracy and efficiency. Several measures such as irregularity, contrast stretching, change in origin, feature extraction and feature selection are considered for accurate melanoma detection and classification. Typically, digital dermoscopy comprises four fundamental image processing steps including preprocessing, segmentation, feature extraction and reduction, and lesion classification. Our survey is compared with the existing surveys in terms of preprocessing techniques (hair removal, contrast stretching) and their challenges, lesion segmentation methods, feature extraction methods with their challenges, features selection techniques, datasets for the validation of the digital system, classification methods and performance measure. Also, a brief summary of each step is presented in the tables. The challenges for each step are also described in detail, which clearly indicate why the digital systems are not performing well. Future directions are also given in this survey.
http://dx.doi.org/10.2174/1573405615666191223122401
July 2021

Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture.

Microsc Res Tech 2021 Jan 21;84(1):133-149. Epub 2020 Sep 21.

School of Clinical Medicine, Zhengzhou University, Zhengzhou, China.

A brain tumor is one of the most dreadful forms of cancer and has caused a huge number of deaths among children and adults in the past few years. According to WHO statistics, about 700,000 people are living with a brain tumor and around 86,000 have been diagnosed since 2019, while the total number of deaths due to brain tumors since 2019 is 16,830 and the average survival rate is 35%. Therefore, automated techniques are needed to grade brain tumors precisely from MRI scans. In this work, a new deep learning-based method is proposed for microscopic brain tumor detection and tumor type classification. A 3D convolutional neural network (CNN) architecture is designed in the first step to extract the brain tumor, and the extracted tumor is passed to a pretrained CNN model for feature extraction. The extracted features are transferred to a correlation-based selection method, which outputs the best features. These selected features are validated through a feed-forward neural network for final classification. Three BraTS datasets (2015, 2017, and 2018) are utilized for experiments and validation, accomplishing accuracies of 98.32, 96.97, and 92.67%, respectively. A comparison with existing techniques shows that the proposed design yields comparable accuracy.
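The abstract does not detail the correlation-based selection method, so the snippet below illustrates just one simple interpretation: rank deep-feature columns by the absolute Pearson correlation with the class label and keep the top k. The feature matrix, label vector, and k are made-up placeholders.

```python
import numpy as np

def correlation_select(F, y, k=100):
    """Rank feature columns by |Pearson correlation| with the label and keep the top k."""
    yc = y - y.mean()
    Fc = F - F.mean(axis=0)
    denom = Fc.std(axis=0) * y.std() + 1e-12
    corr = (Fc * yc[:, None]).mean(axis=0) / denom
    top = np.argsort(np.abs(corr))[::-1][:k]
    return F[:, top], top

# Hypothetical deep-feature matrix (N scans x 4096 activations) with binary tumor labels.
rng = np.random.default_rng(0)
F = rng.normal(size=(300, 4096))
y = rng.integers(0, 2, size=300).astype(float)

F_selected, kept_idx = correlation_select(F, y, k=256)
print(F_selected.shape)          # (300, 256)
```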
http://dx.doi.org/10.1002/jemt.23597
January 2021

Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges.

Authors:
Tanzila Saba

J Infect Public Health 2020 Sep 2;13(9):1274-1289. Epub 2020 Aug 2.

College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia. Electronic address:

Cancer is a fatal illness often caused by the aggregation of genetic disorders and a variety of pathological changes. Cancerous cells are abnormal, life-threatening growths that may develop in any part of the human body. Cancer, also known as tumor, must be detected quickly and correctly at the initial stage to identify what might be beneficial for its cure. Each modality has its own considerations, and complicated history, improper diagnostics, and improper treatment are among the main causes of death. The aim of this research is to analyze, review, categorize, and address the current developments in human body cancer detection using machine learning techniques for breast, brain, lung, liver, and skin cancer as well as leukemia. The study highlights how the cancer diagnosis and cure process is assisted by machine learning with supervised, unsupervised, and deep learning techniques. Several state-of-the-art techniques are categorized under the same cluster, and results are compared on benchmark datasets in terms of accuracy, sensitivity, specificity, and false-positive metrics. Finally, challenges are also highlighted for possible future work.
http://dx.doi.org/10.1016/j.jiph.2020.06.033
September 2020

Secure and energy-efficient framework using Internet of Medical Things for e-healthcare.

J Infect Public Health 2020 Oct 15;13(10):1567-1575. Epub 2020 Jul 15.

Artificial Intelligence & Data Analytics Lab (AIDA) CCIS Prince Sultan University Riyadh, 11586, Saudi Arabia. Electronic address:

The Internet of Things (IoT) has gained a lot of popularity in various fields due to its autonomous sensor operations at minimal cost. In medical and healthcare applications, IoT devices develop an ecosystem to sense the medical conditions of patients, such as blood pressure, oxygen level, heartbeat, and temperature, and take appropriate actions on an emergency basis. Using it, the healthcare-related data of patients is transmitted to remote users and medical centers for post-analysis. Different solutions have been proposed using Wireless Body Area Networks (WBAN) to monitor the medical status of patients based on low-powered biosensor nodes; however, preventing increased energy consumption and communication costs remains a demanding and interesting problem. The issue of unbalanced energy consumption between biosensor nodes degrades the timely delivery of the patient's information to remote centers and negatively impacts the medical system. Moreover, the sensitive data of the patient is transmitted over the insecure Internet and is prone to security threats. Therefore, data privacy and integrity against malicious traffic are another challenging research issue for medical applications. This research article proposes a secure and energy-efficient framework using the Internet of Medical Things (IoMT) for e-healthcare (SEF-IoMT), whose primary objective is to decrease the communication overhead and energy consumption between biosensors while transmitting healthcare data in a convenient manner; on the other hand, it also secures the medical data of patients against unauthentic and malicious nodes to improve network privacy and integrity. The simulation results show that the proposed framework improves the performance of medical systems in terms of network throughput by 18%, packet loss rate by 44%, end-to-end delay by 26%, energy consumption by 29%, and link breaches by 48% compared with other state-of-the-art solutions.
http://dx.doi.org/10.1016/j.jiph.2020.06.027
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7362861
October 2020

Gastric Tract Infections Detection and Classification from Wireless Capsule Endoscopy using Computer Vision Techniques: A Review.

Curr Med Imaging 2020 ;16(10):1229-1242

Department of Electrical & Computer Engineering, National University of Science & Technology, Muscat, Oman.

Recent facts and figures published in various studies in the US show that approximately 27,510 new cases of gastric infections are diagnosed. Furthermore, it has also been reported that the mortality rate is quite high in diagnosed cases. The early detection of these infections can save precious human lives. As the manual process of these infections is time-consuming and expensive, therefore automated Computer-Aided Diagnosis (CAD) systems are required which helps the endoscopy specialists in their clinics. Generally, an automated method of gastric infection detections using Wireless Capsule Endoscopy (WCE) is comprised of the following steps such as contrast preprocessing, feature extraction, segmentation of infected regions, and classification into their relevant categories. These steps consist of various challenges that reduce the detection and recognition accuracy as well as increase the computation time. In this review, authors have focused on the importance of WCE in medical imaging, the role of endoscopy for bleeding-related infections, and the scope of endoscopy. Further, the general steps and highlighting the importance of each step have been presented. A detailed discussion and future directions have been provided at the end.
http://dx.doi.org/10.2174/1573405616666200425220513
January 2020

Detecting Pneumonia using Convolutions and Dynamic Capsule Routing for Chest X-ray Images.

Sensors (Basel) 2020 Feb 15;20(4). Epub 2020 Feb 15.

PRT2L, Washington University in St. Louis, Saint Louis, MO 63110, USA.

An entity's existence in an image can be depicted by the activity instantiation vector from a group of neurons (called capsule). Recently, multi-layered capsules, called CapsNet, have proven to be state-of-the-art for image classification tasks. This research utilizes the prowess of this algorithm to detect pneumonia from chest X-ray (CXR) images. Here, an entity in the CXR image can help determine if the patient (whose CXR is used) is suffering from pneumonia or not. A simple model of capsules (also known as Simple CapsNet) has provided results comparable to best Deep Learning models that had been used earlier. Subsequently, a combination of convolutions and capsules is used to obtain two models that outperform all models previously proposed. These models-Integration of convolutions with capsules (ICC) and Ensemble of convolutions with capsules (ECC)-detect pneumonia with a test accuracy of 95.33% and 95.90%, respectively. The latter model is studied in detail to obtain a variant called EnCC, where n = 3, 4, 8, 16. Here, the E4CC model works optimally and gives test accuracy of 96.36%. All these models had been trained, validated, and tested on 5857 images from Mendeley.
http://dx.doi.org/10.3390/s20041068
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7070644
February 2020

Microscopic melanoma detection and classification: A framework of pixel-based fusion and multilevel features reduction.

Microsc Res Tech 2020 Apr 3;83(4):410-423. Epub 2020 Jan 3.

Department of Computer Engineering, Umm Al-Qura University, Makkah, Saudi Arabia.

The number of patients diagnosed with melanoma is increasing drastically, and melanoma contributes many deaths annually among young people. Approximately 192,310 new cases of skin cancer were diagnosed in 2019, which shows the importance of automated systems for the diagnosis process. Accordingly, this article presents an automated method for skin lesion detection and recognition using pixel-based seed-segmented image fusion and multilevel feature reduction. The proposed method involves four key steps: (a) a mean-based function is implemented and its output is fed to top-hat and bottom-hat filters, which are later fused for contrast stretching; (b) lesions are segmented with seed region growing and a graph-cut method, and both segmented lesions are fused through pixel-based fusion; (c) multilevel features such as the histogram of oriented gradients (HOG), speeded-up robust features (SURF), and color are extracted and simply concatenated; and (d) finally, variance precise entropy-based feature reduction and classification are performed through SVM with a cubic kernel function. Two different experiments are performed for the evaluation of this method. The segmentation performance is evaluated on PH2, ISBI2016, and ISIC2017 with accuracies of 95.86, 94.79, and 94.92%, respectively. The classification performance is evaluated on the PH2 and ISBI2016 datasets with accuracies of 98.20 and 95.42%, respectively. The results of the proposed automated system are outstanding compared with current techniques reported in the state of the art, which demonstrates the validity of the proposed method.
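Steps (c) and (d) concatenate descriptors and classify with a cubic-kernel SVM. A minimal sketch of that pattern is shown below using HOG plus a color histogram (SURF is omitted here because it is not freely available in common builds) and scikit-learn's polynomial kernel of degree 3; the image sizes, bin counts, and random stand-in data are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def describe(img_rgb):
    """Serial concatenation of an HOG descriptor and a simple RGB color histogram."""
    gray = img_rgb.mean(axis=2)
    h = hog(resize(gray, (128, 128)), orientations=9,
            pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    color_hist, _ = np.histogramdd(img_rgb.reshape(-1, 3), bins=(8, 8, 8),
                                   range=((0, 256),) * 3)
    return np.hstack([h, color_hist.ravel() / color_hist.sum()])

# Hypothetical dataset: lesion crops (uint8 RGB arrays) with benign/malignant labels.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (150, 200, 3), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, 40)

X = np.array([describe(im) for im in images])
clf = SVC(kernel="poly", degree=3, C=1.0)             # cubic kernel, as in the abstract
print(cross_val_score(clf, X, labels, cv=5).mean())
```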
http://dx.doi.org/10.1002/jemt.23429
April 2020

A Deep Learning Approach for Automated Diagnosis and Multi-Class Classification of Alzheimer's Disease Stages Using Resting-State fMRI and Residual Neural Networks.

J Med Syst 2019 Dec 18;44(2):37. Epub 2019 Dec 18.

Department of Computer Engineering, University of Engineering and Technology, Taxila, 47050, Pakistan.

Alzheimer's disease (AD) is an incurable neurodegenerative disorder accounting for 70%-80% dementia cases worldwide. Although, research on AD has increased in recent years, however, the complexity associated with brain structure and functions makes the early diagnosis of this disease a challenging task. Resting-state functional magnetic resonance imaging (rs-fMRI) is a neuroimaging technology that has been widely used to study the pathogenesis of neurodegenerative diseases. In literature, the computer-aided diagnosis of AD is limited to binary classification or diagnosis of AD and MCI stages. However, its applicability to diagnose multiple progressive stages of AD is relatively under-studied. This study explores the effectiveness of rs-fMRI for multi-class classification of AD and its associated stages including CN, SMC, EMCI, MCI, LMCI, and AD. A longitudinal cohort of resting-state fMRI of 138 subjects (25 CN, 25 SMC, 25 EMCI, 25 LMCI, 13 MCI, and 25 AD) from Alzheimer's Disease Neuroimaging Initiative (ADNI) is studied. To provide a better insight into deep learning approaches and their applications to AD classification, we investigate ResNet-18 architecture in detail. We consider the training of the network from scratch by using single-channel input as well as performed transfer learning with and without fine-tuning using an extended network architecture. We experimented with residual neural networks to perform AD classification task and compared it with former research in this domain. The performance of the models is evaluated using precision, recall, f1-measure, AUC and ROC curves. We found that our networks were able to significantly classify the subjects. We achieved improved results with our fine-tuned model for all the AD stages with an accuracy of 100%, 96.85%, 97.38%, 97.43%, 97.40% and 98.01% for CN, SMC, EMCI, LMCI, MCI, and AD respectively. However, in terms of overall performance, we achieved state-of-the-art results with an average accuracy of 97.92% and 97.88% for off-the-shelf and fine-tuned models respectively. The Analysis of results indicate that classification and prediction of neurodegenerative brain disorders such as AD using functional magnetic resonance imaging and advanced deep learning methods is promising for clinical decision making and have the potential to assist in early diagnosis of AD and its associated stages.
http://dx.doi.org/10.1007/s10916-019-1475-2
December 2019

An automated nuclei segmentation of leukocytes from microscopic digital images.

Pak J Pharm Sci 2019 Sep;32(5):2123-2138

Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan.

Leukemia is a life-threatening disease. So far, the diagnosis of leukemia has been carried out manually by hematologists, which is time-consuming and error-prone. The crucial problem is segmenting leukocyte nuclei precisely. This paper presents a novel technique to solve this problem by applying the statistical method of a Gaussian mixture model fitted through expectation maximization to the basic and challenging step of leukocyte nuclei segmentation. The proposed technique is tested on a set of 365 images, and the segmentation results are validated both qualitatively and quantitatively against current state-of-the-art methods on the basis of ground-truth data (images manually marked by medical experts). Qualitatively, the proposed technique is compared with current state-of-the-art methods through visual inspection on four different grounds. Quantitatively, the proposed technique achieved an overall segmentation accuracy, sensitivity, and precision of 92.8%, 93.5%, and 98.16%, respectively, with an overall F-measure of 95.75%.
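The core idea is to model pixel colors with a Gaussian mixture fitted by EM and take one mixture component as the nuclei. A minimal sketch with scikit-learn is shown below; the input file name, the three-component setup, and the "darkest component = nuclei" rule are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from skimage import io
from sklearn.mixture import GaussianMixture

# Hypothetical blood-smear image; leukocyte nuclei are typically the darkest, most stained pixels.
img = io.imread("smear.png")[..., :3].astype(float) / 255.0
pixels = img.reshape(-1, 3)

# Three-component mixture: background, cytoplasm/RBCs, and nuclei (component count is an assumption).
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(pixels)
labels = gmm.predict(pixels).reshape(img.shape[:2])

# Take the component with the darkest mean color as the nuclei mask.
nuclei_component = np.argmin(gmm.means_.sum(axis=1))
nuclei_mask = labels == nuclei_component
```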
September 2019

Lung Nodule Detection based on Ensemble of Hand Crafted and Deep Features.

J Med Syst 2019 Nov 8;43(12):332. Epub 2019 Nov 8.

Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad, Pakistan.

Lung cancer is considered one of the deadliest diseases worldwide, causing 1.76 million deaths in 2018. Keeping in view its dreadful effect on humans, cancer detection at a premature stage is a significant requirement to reduce the mortality rate. This manuscript depicts an approach for finding lung nodules at an initial stage that comprises three major phases: (1) lung nodule segmentation using Otsu thresholding followed by morphological operations; (2) extraction of geometrical, texture, and deep learning features and selection of optimal features; (3) serial fusion of the optimal features for classification of lung nodules into two categories, malignant and benign. The Lung Image Database Consortium Image Database Resource Initiative (LIDC-IDRI) is used for experimentation. The experimental outcomes show better performance of the presented approach compared with existing methods.
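Phase (1) is a classic threshold-plus-morphology pipeline. The sketch below shows that pattern with scikit-image; the input file, structuring-element sizes, and minimum blob size are illustrative assumptions, and the real pipeline would of course operate on properly windowed CT data.

```python
import numpy as np
from skimage import io, filters, morphology, measure

# Hypothetical CT slice already windowed to the lung field.
ct = io.imread("ct_slice.png", as_gray=True)

# 1) Otsu threshold separates candidate nodule/vessel pixels from lung parenchyma.
binary = ct > filters.threshold_otsu(ct)

# 2) Morphological clean-up: remove speckle, close small gaps, drop tiny blobs.
binary = morphology.binary_opening(binary, morphology.disk(2))
binary = morphology.binary_closing(binary, morphology.disk(3))
binary = morphology.remove_small_objects(binary, min_size=30)

# 3) Keep per-blob geometry (area, eccentricity, ...) as candidate nodule features.
props = [(r.area, r.eccentricity) for r in measure.regionprops(measure.label(binary))]
print(len(props), "candidate regions")
```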
http://dx.doi.org/10.1007/s10916-019-1455-6
November 2019

A New Approach for Brain Tumor Segmentation and Classification Based on Score Level Fusion Using Transfer Learning.

J Med Syst 2019 Oct 23;43(11):326. Epub 2019 Oct 23.

Department of Electronics and Communication Engineering, Sahyadri College of Engineering & Management, Mangaluru, India.

Brain tumor is one of the most deadly diseases nowadays. The tumor contains a cluster of abnormal cells grouped around the inner portion of the human brain. It affects the brain by squeezing/damaging healthy tissues. It also amplifies intracranial pressure, and as a result tumor cell growth increases rapidly, which may lead to death. It is therefore desirable to diagnose/detect brain tumors at an early stage, which may increase the patient survival rate. The major objective of this research work is to present a new technique for the detection of tumors. The proposed architecture accurately segments and classifies benign and malignant tumor cases. Different spatial domain methods are applied to enhance and accurately segment the input images. Moreover, AlexNet and GoogLeNet are utilized for classification, in which two score vectors are obtained after the softmax layer. Further, both score vectors are fused and supplied to multiple classifiers along with the softmax layer. Evaluation of the proposed model is done on top Medical Image Computing and Computer-Assisted Intervention (MICCAI) challenge datasets, i.e., multimodal brain tumor segmentation (BRATS) 2013, 2014, 2015, 2016 and ischemic stroke lesion segmentation (ISLES) 2018, respectively.
http://dx.doi.org/10.1007/s10916-019-1453-8
October 2019

Modeling user rating preference behavior to improve the performance of the collaborative filtering based recommender systems.

PLoS One 2019 1;14(8):e0220129. Epub 2019 Aug 1.

Information Systems Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia.

One of the main concerns for online shopping websites is to provide efficient and customized recommendations to a very large number of users based on their preferences. Collaborative filtering (CF) is the most famous type of recommender system method to provide personalized recommendations to users. CF generates recommendations by identifying clusters of similar users or items from the user-item rating matrix. This cluster of similar users or items is generally identified by using some similarity measurement method. Among numerous proposed similarity measure methods by researchers, the Pearson correlation coefficient (PCC) is a commonly used similarity measure method for CF-based recommender systems. The standard PCC suffers some inherent limitations and ignores user rating preference behavior (RPB). Typically, users have different RPB, where some users may give the same rating to various items without liking the items and some users may tend to give average rating albeit liking the items. Traditional similarity measure methods (including PCC) do not consider this rating pattern of users. In this article, we present a novel similarity measure method to consider user RPB while calculating similarity among users. The proposed similarity measure method state user RPB as a function of user average rating value, and variance or standard deviation. The user RPB is then combined with an improved model of standard PCC to form an improved similarity measure method for CF-based recommender systems. The proposed similarity measure is named as improved PCC weighted with RPB (IPWR). The qualitative and quantitative analysis of the IPWR similarity measure method is performed using five state-of-the-art datasets (i.e. Epinions, MovieLens-100K, MovieLens-1M, CiaoDVD, and MovieTweetings). The IPWR similarity measure method performs better than state-of-the-art similarity measure methods in terms of mean absolute error (MAE), root mean square error (RMSE), precision, recall, and F-measure.
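The abstract gives the ingredients (standard PCC plus an RPB term built from a user's mean rating and spread) but not the exact formula, so the snippet below is only an illustrative combination in that spirit: Pearson similarity over co-rated items, damped by the difference in the two users' rating-preference behaviour. It is not the authors' IPWR equation.

```python
import numpy as np

def pcc(u, v):
    """Pearson correlation over co-rated items of two rating vectors (0 = unrated)."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def rpb(u):
    """Rating-preference behaviour as a function of a user's mean rating and spread."""
    rated = u[u > 0]
    return rated.mean() * (1.0 + rated.std())

def ipwr_like(u, v):
    """Illustrative IPWR-style score: PCC attenuated by the gap in rating behaviour."""
    return pcc(u, v) / (1.0 + abs(rpb(u) - rpb(v)))

# Toy user-item rating matrix (rows = users, 0 means "not rated").
R = np.array([[5, 4, 0, 3, 5],
              [4, 5, 3, 0, 4],
              [1, 2, 1, 2, 0]], dtype=float)
print(ipwr_like(R[0], R[1]), ipwr_like(R[0], R[2]))
```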
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0220129
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6675073
March 2020

Region Extraction and Classification of Skin Cancer: A Heterogeneous framework of Deep CNN Features Fusion and Reduction.

J Med Syst 2019 Jul 20;43(9):289. Epub 2019 Jul 20.

College of Computer and Information Sciences, Prince Sultan University, Riyadh, 11586, Saudi Arabia.

Cancer has been one of the leading causes of death in the last two decades. It is diagnosed as either malignant or benign, depending upon the severity of the infection and the current stage. The conventional methods require a detailed physical inspection by an expert dermatologist, which is time-consuming and imprecise. Therefore, several computer vision methods have been introduced lately, which are cost-effective and reasonably accurate. In this work, we propose a new automated approach for skin lesion detection and recognition using a deep convolutional neural network (DCNN). The proposed cascaded design incorporates three fundamental steps: (a) contrast enhancement through fast local Laplacian filtering (FlLpF) along with HSV color transformation; (b) lesion boundary extraction using a color CNN approach followed by an XOR operation; (c) in-depth feature extraction by applying transfer learning using the Inception V3 model prior to feature fusion using the hamming distance (HD) approach. An entropy-controlled feature selection method is also introduced for the selection of the most discriminant features. The proposed method is tested on the PH2 and ISIC 2017 datasets, whereas the recognition phase is validated on the PH2, ISBI 2016, and ISBI 2017 datasets. From the results, it is concluded that the proposed method outperforms several existing methods, attaining accuracies of 98.4% on the PH2 dataset, 95.1% on the ISBI 2016 dataset, and 94.8% on the ISBI 2017 dataset.
http://dx.doi.org/10.1007/s10916-019-1413-3
July 2019

Brain tumor detection using statistical and machine learning method.

Comput Methods Programs Biomed 2019 Aug 17;177:69-79. Epub 2019 May 17.

College of EME, NUST, Islamabad, Pakistan.

Background And Objective: Brain tumors occur because of anomalous development of cells and are one of the major causes of death in adults around the globe. Millions of deaths can be prevented through early detection of brain tumors. Early brain tumor detection using Magnetic Resonance Imaging (MRI) may increase the patient's survival rate. In MRI, the tumor is shown more clearly, which helps in the process of further treatment. This work aims to detect tumors at an early phase.

Methods: In this manuscript, a Wiener filter with different wavelet bands is used to de-noise and enhance the input slices. Subsets of tumor pixels are found with Potential Field (PF) clustering. Furthermore, a global threshold and different mathematical morphology operations are used to isolate the tumor region in Fluid Attenuated Inversion Recovery (FLAIR) and T2 MRI. For accurate classification, Local Binary Pattern (LBP) and Gabor Wavelet Transform (GWT) features are fused.
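The classification step fuses LBP and Gabor-wavelet texture descriptors. A minimal sketch of computing and serially concatenating the two is shown below; the LBP parameters, Gabor frequencies, and the random stand-in region of interest are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

def lbp_hist(img, P=8, R=1.0):
    """Uniform LBP codes summarised as a normalised histogram."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def gabor_stats(img, frequencies=(0.1, 0.2, 0.3)):
    """Mean/std of the Gabor magnitude response at a few frequencies."""
    feats = []
    for f in frequencies:
        real, imag = gabor(img, frequency=f)
        mag = np.hypot(real, imag)
        feats += [mag.mean(), mag.std()]
    return np.array(feats)

# Hypothetical segmented tumor region (2-D grayscale array).
rng = np.random.default_rng(0)
roi = rng.random((64, 64))

fused = np.hstack([lbp_hist(roi), gabor_stats(roi)])   # serial fusion of LBP and GWT features
print(fused.shape)
```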

Results: The proposed approach is evaluated in terms of peak signal to noise ratio (PSNR), mean squared error (MSE) and structured similarity index (SSIM) yielding results as 76.38, 0.037 and 0.98 on T2 and 76.2, 0.039 and 0.98 on Flair respectively. The segmentation results have been evaluated based on pixels, individual features and fused features. At pixels level, the comparison of proposed approach is done with ground truth slices and also validated in terms of foreground (FG) pixels, background (BG) pixels, error region (ER) and pixel quality (Q). The approach achieved 0.93 FG and 0.98 BG precision and 0.010 ER on a local dataset. On multimodal brain tumor segmentation challenge dataset BRATS 2013, 0.93 FG and 0.99 BG precision and 0.005 ER are acquired. Similarly on BRATS 2015, 0.97 FG and 0.98 BG precision and 0.015 ER are obtained. In terms of quality, the average Q value and deviation are 0.88 and 0.017. At the fused feature based level, specificity, sensitivity, accuracy, area under the curve (AUC) and dice similarity coefficient (DSC) are 1.00, 0.92, 0.93, 0.96 and 0.96 on BRATS 2013, 0.90, 1.00, 0.97, 0.98 and 0.98 on BRATS 2015 and 0.90, 0.91, 0.90, 0.77 and 0.95 on local dataset respectively.

Conclusion: The presented approach outperformed as compared to existing approaches.
http://dx.doi.org/10.1016/j.cmpb.2019.05.015
August 2019

Automated lung nodule detection and classification based on multiple classifiers voting.

Authors:
Tanzila Saba

Microsc Res Tech 2019 Sep 26;82(9):1601-1609. Epub 2019 Jun 26.

College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia.

Lung cancer is the most common cause of cancer-related death globally. Currently, lung nodule detection and classification are performed by radiologist-assisted computer-aided diagnosis systems. However, emerging artificial intelligence techniques such as neural networks, support vector machines, and HMMs have improved the detection and classification of cancer in any part of the human body. Such automated methods and their possible combinations could be used to assist radiologists in early detection of lung nodules, which could reduce treatment cost and death rate. The literature reveals that classification based on the voting of classifiers exhibits better performance in the detection and classification process. Accordingly, this article presents an automated approach for lung nodule detection and classification that consists of multiple steps including lesion enhancement, segmentation, and feature extraction from each candidate lesion. Moreover, multiple classifiers (logistic regression, multilayer perceptron, and voted perceptron) are tested for lung nodule classification using a k-fold cross-validation process. The proposed approach is evaluated on the publicly available Lung Image Database Consortium benchmark data set. Based on the performance evaluation, it is observed that the proposed method performs better than the state of the art and achieved an overall accuracy rate of 100%.
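The classification stage combines several classifiers by voting and evaluates them with k-fold cross-validation. The sketch below shows that pattern in scikit-learn; the synthetic feature matrix stands in for the nodule descriptors, and sklearn's Perceptron is used as a stand-in for the voted perceptron mentioned in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in features: in the paper these would come from segmented nodule candidates (LIDC).
X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)

voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
        ("per", Perceptron(max_iter=1000, random_state=0)),   # stand-in for the voted perceptron
    ],
    voting="hard",               # majority vote, since Perceptron exposes no probabilities
)
model = make_pipeline(StandardScaler(), voter)
print(cross_val_score(model, X, y, cv=10).mean())   # k-fold cross-validation as in the abstract
```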
http://dx.doi.org/10.1002/jemt.23326
September 2019

A comprehensive study of mobile-health based assistive technology for the healthcare of dementia and Alzheimer's disease (AD).

Health Care Manag Sci 2020 Jun 20;23(2):287-309. Epub 2019 Jun 20.

Information Systems Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia.

Assistive technology (AT) involvement in therapeutic treatment has provided simple and efficient healthcare solutions to people. Within a short span of time, mobile health (mHealth) has grown rapidly for assisting people living with a chronic disorder. This research paper presents a comprehensive study to identify and review existing mHealth dementia applications (apps) and to synthesize the evidence on using these applications to assist people with dementia, including Alzheimer's disease (AD), and their caregivers. Six electronic databases were searched with the purpose of finding literature-based evidence. The search yielded 2818 research articles, with 29 meeting the quantified inclusion and exclusion criteria. Six groups and their associated sub-groups emerged from the literature. The main groups are (1) activities of daily living (ADL) based cognitive training, (2) monitoring, (3) dementia screening, (4) reminiscence and socialization, (5) tracking, and (6) caregiver support. Moreover, two commercial mobile application stores, i.e., the Apple App Store (iOS) and Google Play Store (Android), were explored with the intention of identifying the advantages and disadvantages of existing commercially available dementia and AD healthcare apps. From 678 apps, a total of 38 mobile apps qualified as per the defined exclusion and inclusion criteria. The shortlisted commercial apps generally targeted different aspects of dementia as identified in the research articles. This comprehensive study determined the feasibility of using mobile-health-based applications for individuals with dementia, including AD, and their caregivers despite the limited available research, and found that these apps have the capability to incorporate a variety of strategies and resources into dementia community care.
http://dx.doi.org/10.1007/s10729-019-09486-0
June 2020

Intelligent microscopic approach for identification and recognition of citrus deformities.

Microsc Res Tech 2019 Sep 18;82(9):1542-1556. Epub 2019 Jun 18.

Department of Computer Science and Engineering, HITEC University, Taxila, Pakistan.

Plant diseases are responsible for economic losses in agricultural countries. The manual process of plant disease diagnosis has been a key challenge for the last decade; therefore, researchers in this area have introduced automated systems. In this research work, an automated system is proposed for citrus fruit disease recognition using computer vision techniques. The proposed method incorporates five fundamental steps: preprocessing, disease segmentation, feature extraction and reduction, fusion, and classification. In the very first phase, noise is removed, followed by a contrast stretching procedure. Later, the watershed method is applied to extract the infected regions. Shape, texture, and color features are subsequently computed from these infected regions. In the fourth step, reduced features are fused using a serial-based approach, followed by a final step of classification using a multiclass support vector machine. For dimensionality reduction, principal component analysis is utilized, which is a statistical procedure that enforces an orthogonal transformation on a set of observations. Three different image data sets (Citrus Image Gallery, Plant Village, and self-collected) are combined in this research, achieving a classification accuracy of 95.5%. From the statistics, it is quite clear that our proposed method outperforms several existing methods with greater precision and accuracy.
http://dx.doi.org/10.1002/jemt.23320
September 2019

Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation.

Microsc Res Tech 2019 Aug 29;82(8):1302-1315. Epub 2019 Apr 29.

School of Computer and Technology, Anhui University, Hefei, China.

Automatic and precise segmentation and classification of tumor areas in medical images is still a challenging task in medical research. Most of the conventional neural network based models use fully connected or convolutional neural networks to perform segmentation and classification. In this research, we present deep learning models using long short-term memory (LSTM) and convolutional neural networks (ConvNet) for accurate brain tumor delineation from benchmark medical images. The two different models, that is, the ConvNet and LSTM networks, are trained using the same data set and combined to form an ensemble to improve the results. We used the publicly available MICCAI BRATS 2015 brain cancer data set consisting of MRI images of four modalities: T1, T2, T1c, and FLAIR. To enhance the quality of the input images, multiple combinations of preprocessing methods such as noise removal, histogram equalization, and edge enhancement are formulated and the best performing combination is applied. To cope with the class imbalance problem, class weighting is used in the proposed models. The trained models are tested on a validation data set taken from the same image set, and the results obtained from each model are reported. The individual score (accuracy) of the ConvNet is found to be 75%, whereas the LSTM-based network produced 80% and the ensemble fusion produced 82.29% accuracy.
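Class weighting is the standard remedy when tumor pixels are vastly outnumbered by healthy tissue. The short sketch below shows the common pattern of deriving balanced class weights with scikit-learn and handing them to a Keras-style fit; the label counts are made up for illustration.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical patch labels: 0 = healthy tissue, 1 = tumor (heavily under-represented).
y_train = np.array([0] * 950 + [1] * 50)

weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train), y=y_train)
class_weight = dict(zip(np.unique(y_train), weights))
print(class_weight)      # e.g. {0: ~0.53, 1: 10.0} -> the rare tumor class is weighted up

# In a Keras-style model the mapping is passed straight to fit():
# model.fit(X_train, y_train, class_weight=class_weight, epochs=..., batch_size=...)
```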
http://dx.doi.org/10.1002/jemt.23281
August 2019

Mobile-Health Applications for the Efficient Delivery of Health Care Facility to People with Dementia (PwD) and Support to Their Carers: A Survey.

Biomed Res Int 2019 27;2019:7151475. Epub 2019 Mar 27.

Department of Computer Engineering, Umm Al-Qura University, Makkah 21421, Saudi Arabia.

Dementia directly influences the quality of life of a person suffering from this chronic illness. The caregivers or carers of people with dementia provide critical support to them but are subject to negative health outcomes because of burden and stress. The intervention of mobile health (mHealth) has become a fast-growing assistive technology (AT) in the therapeutic treatment of individuals with chronic illness. The purpose of this comprehensive study is to identify, appraise, and synthesize the existing evidence on the use of mHealth applications (apps) as a healthcare resource for people with dementia and their caregivers. A review of both peer-reviewed and full-text literature was undertaken across five (05) electronic databases, checking articles published during the last five years (between 2014 and 2018). Out of the 6195 articles yielded by the searches, 17 were included according to the inclusion and exclusion criteria. The included studies distinguish between five categories, viz., (1) cognitive training and daily living, (2) screening, (3) health and safety monitoring, (4) leisure and socialization, and (5) navigation. Furthermore, the two most popular commercial app stores, i.e., Google Play Store and Apple App Store, were searched for mHealth-based dementia apps for PwD and their caregivers. The initial search generated 356 apps, with thirty-five (35) meeting the defined inclusion and exclusion criteria. After shortlisting the mobile applications, it was observed that these existing apps generally addressed different dementia-specific aspects overlapping with the categories identified in the research articles. The comprehensive study concluded that mobile health apps appear to be a feasible AT intervention for PwD and their carers irrespective of the limited available research, and that these apps have the potential to provide different resources and strategies to help this community.
http://dx.doi.org/10.1155/2019/7151475
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6457307
August 2019

Lungs nodule detection framework from computed tomography images using support vector machine.

Microsc Res Tech 2019 Aug 11;82(8):1256-1266. Epub 2019 Apr 11.

Department of EE, COMSATS University Islamabad, Wah Campus, Islamabad, Pakistan.

The emergence of cloud infrastructure has the potential to provide significant benefits in a variety of areas in the medical imaging field. The driving force behind the extensive use of cloud infrastructure for medical image processing is the exponential increase in the size of computed tomography (CT) and magnetic resonance imaging (MRI) data. The size of a single CT/MRI image has increased manifold since the inception of these imaging techniques. This demands the introduction of effective and efficient frameworks for extracting relevant and the most suitable information (features) from these sizeable images. As early detection of lung cancer can significantly increase a patient's chances of survival, an effective and efficient nodule detection system can play a vital role. In this article, we propose a novel lung nodule classification framework with a low false positive rate (FPR), high accuracy and sensitivity, and low computational cost, which uses a small set of features while preserving edge and texture information. The proposed framework comprises multiple phases that include image contrast enhancement, segmentation, and feature extraction, followed by the employment of these features for training and testing a selected classifier. Image preprocessing and feature selection are the primary steps, playing a vital role in achieving improved classification accuracy. We have empirically tested the efficacy of our technique by utilizing the well-known Lung Image Database Consortium dataset. The results prove that the technique is highly effective for reducing FPRs, with an impressive sensitivity rate of 97.45%.
http://dx.doi.org/10.1002/jemt.23275
August 2019

Fault Detection in Wireless Sensor Networks through the Random Forest Classifier.

Sensors (Basel) 2019 Apr 1;19(7). Epub 2019 Apr 1.

College of Science, Zagazig University, Zagazig 44511, Egypt.

Wireless Sensor Networks (WSNs) are vulnerable to faults because of their deployment in unpredictable and hazardous environments. This makes WSN prone to failures such as software, hardware, and communication failures. Due to the sensor's limited resources and diverse deployment fields, fault detection in WSNs has become a daunting task. To solve this problem, Support Vector Machine (SVM), Convolutional Neural Network (CNN), Stochastic Gradient Descent (SGD), Multilayer Perceptron (MLP), Random Forest (RF), and Probabilistic Neural Network (PNN) classifiers are used for classification of gain, offset, spike, data loss, out of bounds, and stuck-at faults at the sensor level. Out of six faults, two of them are induced in the datasets, i.e., spike and data loss faults. The results are compared on the basis of their Detection Accuracy (DA), True Positive Rate (TPR), Matthews Correlation Coefficients (MCC), and F1-score. In this paper, a comparative analysis is performed among the classifiers mentioned previously on real-world datasets. Simulations show that the RF algorithm secures a better fault detection rate than the rest of the classifiers.
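The comparison centres on a random-forest classifier scored with detection accuracy, TPR, MCC, and F1. The sketch below reproduces that evaluation loop on a synthetic multi-class dataset standing in for labelled sensor readings; the fault-class count, feature dimensions, and hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef, recall_score
from sklearn.model_selection import train_test_split

# Stand-in for labelled sensor readings (classes: normal, gain, offset, spike, data loss, ...).
X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
pred = rf.predict(Xte)

print("DA :", accuracy_score(yte, pred))                   # detection accuracy
print("TPR:", recall_score(yte, pred, average="macro"))    # macro true-positive rate
print("MCC:", matthews_corrcoef(yte, pred))
print("F1 :", f1_score(yte, pred, average="macro"))
```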
http://dx.doi.org/10.3390/s19071568
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6480196
April 2019