Publications by authors named "Ana Maria Cretu"

8 Publications


HPV and HIV Coinfection in Women from a Southeast Region of Romania-PICOPIV Study.

Medicina (Kaunas) 2022 Jun 3;58(6). Epub 2022 Jun 3.

Center for Research and Development of the Morphological and Genetic Studies of Malignant Pathology, Ovidius University of Constanta, 145 Tomis Blvd., 900591 Constanta, Romania.

Background and Objectives: Romania faces one of the highest cervical cancer burdens in Europe, although this cancer is preventable through population screening by cytology and human papillomavirus (HPV) detection. Romania also has one of the highest incidences of human immunodeficiency virus (HIV) infection, and HPV and HIV coinfection is frequently encountered. The aim of the study was to establish the prevalence of HPV infection among HIV-positive women in the Southeast Region of Romania, to genotype high-risk HPV types, and to correlate the results with clinical data and cytological cervical lesions. Materials and Methods: 40 HIV-positive women were screened for HPV types and for cytological cervical lesions. The findings were evaluated in correlation with CD4 cell counts, HIV viral load, age at first sexual intercourse, number of sexual partners, vaginal candidiasis, and Gardnerella using statistical methods. Results: 19/40 (47.5%) women were positive for HPV types; 63.15% were infected with a single HPV type and 36.85% with multiple HPV types. The most frequent types were 31 (42.1%), 56 (31.57%), and 53 (15.78%). On cytology, 34 (85%) women were found with NILM, of whom 38.23% were HPV-positive. Fifteen percent of women had abnormal cytology (three ASC-US, three LSIL), and all of them were HPV-positive. Women with a CD4 count ≤ 200 cells/μL were significantly more likely to be infected with HPV, whereas there was no correlation between the detection of HPV types and HIV viral load. Candida and Gardnerella were more often associated with HIV-positive women with HPV than with women without HPV. Conclusions: Infection with HPV types is common among HIV-positive women in the Southeast Region of Romania and is associated with age at the beginning of sexual life, number of sexual partners, CD4 count, vaginal candidiasis, and Gardnerella infection.
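As a purely illustrative aside on the kind of association test such a study relies on (CD4 count ≤ 200 cells/μL vs. HPV positivity), the sketch below runs Fisher's exact test on a hypothetical 2 × 2 table; the counts are invented placeholders and do not come from the PICOPIV study.

```python
# Illustrative only: Fisher's exact test on a hypothetical contingency table.
# The counts below are placeholders and do NOT reproduce the study's data.
from scipy.stats import fisher_exact

#                 HPV+   HPV-
# CD4 <= 200        8      3
# CD4 >  200       11     18
table = [[8, 3], [11, 18]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```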
DOI: http://dx.doi.org/10.3390/medicina58060760
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9231193
June 2022

Interaction data are identifiable even across long periods of time.

Nat Commun 2022 Jan 25;13(1):313. Epub 2022 Jan 25.

Department of Computing, Imperial College London, London, SW7 2AZ, UK.

Fine-grained records of people's interactions, both offline and online, are collected at large scale. These data contain sensitive information about whom we meet, talk to, and when. We demonstrate here how people's interaction behavior is stable over long periods of time and can be used to identify individuals in anonymous datasets. Our attack learns the profile of an individual using geometric deep learning and triplet loss optimization. In a mobile phone metadata dataset of more than 40k people, it correctly identifies 52% of individuals based on their 2-hop interaction graph. We further show that the profiles learned by our method are stable over time and that 24% of people are still identifiable after 20 weeks. Our results suggest that people with well-balanced interaction graphs are more identifiable. Applying our attack to Bluetooth close-proximity networks, we show that even 1-hop interaction graphs are enough to identify people more than 26% of the time. Our results provide strong evidence that disconnected and even re-pseudonymized interaction data can be linked together, making them personal data under the European Union's General Data Protection Regulation.
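A minimal sketch of the general technique named above (triplet-loss profile learning followed by nearest-neighbour matching), using synthetic behavioural vectors in place of the paper's 2-hop interaction graphs and geometric deep learning encoder; the model, data, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ProfileEncoder(nn.Module):
    """Maps a fixed-length summary of a person's interaction behaviour
    (a stand-in for the paper's 2-hop interaction graph) to a unit-norm embedding."""
    def __init__(self, in_dim=64, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

encoder = ProfileEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Toy data: anchor/positive are the same person's behaviour in two time periods,
# negative is a different person's behaviour.
anchor = torch.randn(256, 64)
positive = anchor + 0.1 * torch.randn(256, 64)
negative = torch.randn(256, 64)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    optimizer.step()

# Identification: match each "later" profile to its nearest "earlier" profile.
with torch.no_grad():
    gallery, queries = encoder(anchor), encoder(positive)
    match = torch.cdist(queries, gallery).argmin(dim=1)
    rank1 = (match == torch.arange(len(queries))).float().mean()
    print(f"rank-1 identification on toy data: {rank1:.2%}")
```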
DOI: http://dx.doi.org/10.1038/s41467-021-27714-6
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8789822
January 2022

Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition.

Sensors (Basel) 2020 Dec 27;21(1). Epub 2020 Dec 27.

Department of Systems and Computer Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada.

Transfer of learning, or leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing, and audio/speech recognition. Drawing inspiration from neuroscience research suggesting that visual and tactile stimuli rouse similar neural networks in the human brain, in this work we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNN) pre-trained on visual images are adapted and evaluated for the classification of tactile datasets. To do so, we ran experiments with five different pre-trained CNN architectures on five different datasets acquired with different tactile sensor technologies, including BathTip, Gelsight, a force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and the tactile sensors on the Barrett robotic hand. The results obtained confirm the transferability of learning from vision to touch for interpreting 3D models. Due to their higher resolution, tactile data from optical tactile sensors were shown to achieve higher classification rates based on visual features than data from technologies relying on pressure measurements. Further analysis of the weight updates in the convolutional layers is performed to measure the similarity between visual and tactile features for each tactile sensing technology. Comparing the weight updates in different convolutional layers suggests that a CNN pre-trained on visual data can be used efficiently to classify tactile data after updating only a few of its convolutional layers. Accordingly, we propose a hybrid architecture performing both visual and tactile 3D object recognition with a MobileNetV2 backbone. MobileNetV2 is chosen for its smaller size and thus its suitability for deployment on mobile devices, such that the network can classify both visual and tactile data. The proposed architecture achieves an accuracy of 100% on visual data and 77.63% on tactile data.
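A hedged sketch of the transfer-learning setup described above, assuming a PyTorch/torchvision workflow: a MobileNetV2 pre-trained on ImageNet, most convolutional layers frozen, and the last few blocks plus a new classification head fine-tuned on tactile "images". The class count, input shapes, and training loop are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

num_tactile_classes = 10  # placeholder

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Freeze all convolutional features ...
for p in model.features.parameters():
    p.requires_grad = False
# ... then unfreeze the last few blocks, mirroring the observation that updating
# only a few convolutional layers is enough to adapt visual features to touch.
for p in model.features[-3:].parameters():
    p.requires_grad = True

# New classification head for the tactile classes.
model.classifier[1] = nn.Linear(model.last_channel, num_tactile_classes)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Toy batch standing in for pre-processed tactile images (3-channel, 224x224).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_tactile_classes, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```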
DOI: http://dx.doi.org/10.3390/s21010113
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7795850
December 2020

An Application of Deep Learning to Tactile Data for Object Recognition under Visual Guidance.

Sensors (Basel) 2019 Mar 29;19(7). Epub 2019 Mar 29.

Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada.

Drawing inspiration from haptic exploration of objects by humans, the current work proposes a novel framework for robotic tactile object recognition in which visual information, in the form of a set of visually interesting points, is employed to guide the process of tactile data acquisition. Neuroscience research confirms that, for object recognition, humans integrate cutaneous data sensed in response to surface changes with data from joints, muscles, and bones (kinesthetic cues). Psychological studies further demonstrate that humans tend to follow object contours to perceive their global shape, which leads to object recognition. In compliance with these findings, a series of contours is determined around a set of 24 virtual objects, from which bimodal tactile data (kinesthetic and cutaneous) are obtained sequentially while adaptively changing the size of the sensor surface according to each object's geometry. A virtual force-sensing resistor (FSR) array is employed to capture cutaneous cues. Two different methods for sequential data classification are then implemented, using Convolutional Neural Networks (CNN) and conventional classifiers, including support vector machines and k-nearest neighbors. For the conventional classifiers, we exploit the contourlet transform to extract features from tactile images. For the CNN approach, two networks are trained on cutaneous and kinesthetic data, respectively, and a novel hybrid decision-making strategy is proposed for object recognition. The proposed framework is tested both for contours determined blindly (randomly determined contours of objects) and for contours determined using a model of visual attention. Trained classifiers are tested on 4560 new sequential tactile data samples; the CNN trained on tactile data from object contours selected by the model of visual attention yields an accuracy of 98.97%, the highest among the implemented approaches.
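An illustrative late-fusion sketch for the two-network idea above: one small CNN for cutaneous (tactile-image) data, one for kinesthetic data, with class probabilities averaged over a contour sequence. The paper's actual hybrid decision-making strategy is not spelled out in the abstract, so this averaging rule, the network shapes, and the input sizes are assumptions.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 24  # 24 virtual objects in the described dataset

def small_cnn(in_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, NUM_CLASSES),
    )

cutaneous_net = small_cnn(in_channels=1)    # FSR "tactile images"
kinesthetic_net = small_cnn(in_channels=1)  # kinesthetic cues rendered as 2D maps

def classify_sequence(cutaneous_seq, kinesthetic_seq):
    """Average per-sample class probabilities from both modalities over a contour."""
    with torch.no_grad():
        p_cut = torch.softmax(cutaneous_net(cutaneous_seq), dim=1)
        p_kin = torch.softmax(kinesthetic_net(kinesthetic_seq), dim=1)
        fused = (p_cut + p_kin).mean(dim=0)  # average over modality and sequence
    return fused.argmax().item()

# Toy contour of 10 tactile samples, 32x32 each.
cut = torch.randn(10, 1, 32, 32)
kin = torch.randn(10, 1, 32, 32)
print("predicted object id:", classify_sequence(cut, kin))
```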
DOI: http://dx.doi.org/10.3390/s19071534
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6480322
March 2019

Wearable Sensor Data Classification for Human Activity Recognition Based on an Iterative Learning Framework.

Sensors (Basel) 2017 Jun 7;17(6). Epub 2017 Jun 7.

Department of Computer Science and Engineering, Université du Québec en Outaouais, Gatineau, QC J8Y 3G5, Canada.

The design of many human activity recognition applications in areas such as healthcare, sports, and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) filter and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw, and the norm of the axial components. The novel iterative learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of the data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train a multi-class SVM classifier, which produces the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data while using far fewer training samples, and therefore a much shorter training time, which is an important consideration given the large size of the dataset.
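A minimal sketch of the sample-selection step described above, under the assumption that "most distant from the centroids" means largest Euclidean distance to a sample's own cluster centroid; the feature pipeline (FIR/wavelet filtering, PCA/SVD of roll, pitch, and yaw) is replaced by random placeholder features, so this is not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))     # placeholder feature vectors
y = rng.integers(0, 4, size=1000)   # 4 locomotion classes: walk/stand/lie/sit

def select_informative(X, y, per_class=50):
    """Keep, for each class, the samples farthest from that class centroid."""
    keep = []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - centroid, axis=1)
        keep.extend(idx[np.argsort(dist)[-per_class:]])  # most distant samples
    return np.array(keep)

sel = select_informative(X, y)
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X[sel], y[sel])
print("training set reduced to", len(sel), "of", len(X), "samples")
```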
DOI: http://dx.doi.org/10.3390/s17061287
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5492798
June 2017

Multimodal Bio-Inspired Tactile Sensing Module for Surface Characterization.

Sensors (Basel) 2017 May 23;17(6). Epub 2017 May 23.

School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada.

Robots are expected to recognize the properties of objects in order to handle them safely and efficiently in a variety of applications, such as health and elder care, manufacturing, or high-risk environments. This paper explores the issue of surface characterization by monitoring the signals acquired by a novel bio-inspired tactile probe in contact with ridged surfaces. The tactile module comprises a nine-degree-of-freedom microelectromechanical Magnetic, Angular Rate, and Gravity system (9-DOF MEMS MARG) and a deep MEMS pressure sensor embedded in a compliant structure that mimics the function and organization of mechanoreceptors in human skin as well as the hardness of human skin. When the module's tip slides over a surface, the MARG unit vibrates and the deep pressure sensor captures the overall normal force exerted. The module is evaluated in two experiments. The first experiment compares the frequency content of the data collected in two setups: one in which the module is mounted on a linear motion carriage that slides four grating patterns at constant velocities, and one in which the module is carried by a robotic finger in contact with the same grating patterns while performing a sliding motion, similar to the exploratory motion employed by humans to detect object roughness. As expected, in the linear setup the magnitude spectrum of the sensors' output shows that the module can detect the applied stimuli, with frequencies ranging from 3.66 Hz to 11.54 Hz, with an overall maximum error of ±0.1 Hz. The second experiment shows how localized features extracted from the data collected by the robotic finger setup over seven synthetic shapes can be used to classify them. The classification method consists of applying multiscale principal component analysis prior to classification with a multilayer neural network. Accuracies from 85.1% to 98.9% achieved for the various sensor types demonstrate the usefulness of traditional MEMS as tactile sensors embedded into flexible substrates.
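A simple sketch of the kind of spectral analysis used in the first experiment: estimating the dominant vibration frequency of one sensor channel from its magnitude spectrum. The sampling rate and the signal below are synthetic placeholders, not data from the module.

```python
import numpy as np

fs = 100.0                    # assumed sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
# Synthetic MARG channel: sliding over a ridged surface produces a ~7 Hz ripple.
signal = np.sin(2 * np.pi * 7.0 * t) + 0.2 * np.random.default_rng(1).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
dominant = freqs[spectrum.argmax()]
print(f"dominant stimulus frequency: {dominant:.2f} Hz")
```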
DOI: http://dx.doi.org/10.3390/s17061187
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5490693
May 2017

Acquisition and Neural Network Prediction of 3D Deformable Object Shape Using a Kinect and a Force-Torque Sensor.

Sensors (Basel) 2017 May 11;17(5). Epub 2017 May 11.

Department of Computer Science and Engineering, Université du Québec en Outaouais, Gatineau, J8X 3X7 QC, Canada.

The realistic representation of deformations is still an active area of research, especially for deformable objects whose behavior cannot be simply described in terms of elasticity parameters. This paper proposes a data-driven neural-network-based approach for capturing implicitly and predicting the deformations of an object subject to external forces. Visual data, in the form of 3D point clouds gathered by a Kinect sensor, is collected over an object while forces are exerted by means of the probing tip of a force-torque sensor. A novel approach based on neural gas fitting is proposed to describe the particularities of a deformation over the selectively simplified 3D surface of the object, without requiring knowledge of the object material. An alignment procedure, a distance-based clustering, and inspiration from stratified sampling support this process. The resulting representation is denser in the region of the deformation (an average of 96.6% perceptual similarity with the collected data in the deformed area), while still preserving the object's overall shape (86% similarity over the entire surface) and using, on average, only 40% of the number of vertices in the mesh. A series of feedforward neural networks is then trained to predict the mapping between the force parameters characterizing the interaction with the object and the change in the object shape, as captured by the fitted neural gas nodes. This series of networks allows for the prediction of the deformation of an object when subject to unknown interactions.
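A compact sketch of classical (rank-based) neural gas fitting on a 3D point cloud, as a stand-in for the fitting step described above; the point cloud is random and all parameters are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)
points = rng.normal(size=(2000, 3))   # placeholder for a Kinect point cloud
n_nodes, n_epochs = 100, 10
nodes = points[rng.choice(len(points), n_nodes, replace=False)].copy()

eps_i, eps_f = 0.5, 0.01              # learning-rate schedule
lam_i, lam_f = n_nodes / 2.0, 0.1     # neighbourhood-range schedule
t_max = n_epochs * len(points)

t = 0
for _ in range(n_epochs):
    for p in points[rng.permutation(len(points))]:
        frac = t / t_max
        eps = eps_i * (eps_f / eps_i) ** frac
        lam = lam_i * (lam_f / lam_i) ** frac
        # Rank nodes by distance to the input point and pull each toward it,
        # with a strength that decays exponentially with its rank.
        ranks = np.argsort(np.linalg.norm(nodes - p, axis=1)).argsort()
        nodes += (eps * np.exp(-ranks / lam))[:, None] * (p - nodes)
        t += 1

print("fitted", n_nodes, "neural gas nodes to", len(points), "points")
```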
DOI: http://dx.doi.org/10.3390/s17051083
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5470473
May 2017

Soft object deformation monitoring and learning for model-based robotic hand manipulation.

IEEE Trans Syst Man Cybern B Cybern 2012 Jun 27;42(3):740-53. Epub 2011 Dec 27.

School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada.

This paper discusses the design and implementation of a framework that automatically extracts and monitors the shape deformations of soft objects from a video sequence and maps them to force measurements, with the goal of providing the controller of a robotic hand with the information necessary for safe, model-based manipulation of deformable objects. Measurements corresponding to the interaction force at the fingertips and to the position of the fingertips of a three-finger robotic hand are associated with the contours of a deformed object tracked in a series of images using neural-network approaches. The resulting model captures the behavior of the object and is able to predict its behavior for previously unseen interactions without any assumption about the object's material. The availability of such models can contribute to the improvement of a robotic hand controller, allowing a more accurate and stable grasp while providing more elaborate manipulation capabilities for deformable objects. Experiments performed on different objects, made of various materials, reveal that the method accurately captures and predicts the object's shape deformation while the object is subjected to external forces applied by the robot fingers. The proposed method is also fast and insensitive to severe contour deformations, as well as to smooth changes in lighting, contrast, and background.
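A hedged sketch of the core learning step: a feedforward network mapping fingertip force and position measurements to a flattened object contour. All dimensions and data below are placeholders; the paper's actual tracking pipeline and network design are more elaborate.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n_samples, n_contour_pts = 500, 40

# Inputs: per-finger force magnitude and fingertip position for a three-finger hand.
X = rng.normal(size=(n_samples, 3 * (1 + 3)))   # 3 fingers x (force + xyz position)
# Targets: the deformed contour as flattened (x, y) coordinates.
Y = rng.normal(size=(n_samples, 2 * n_contour_pts))

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
model.fit(X, Y)

# Predict the contour for a previously unseen interaction.
predicted_contour = model.predict(X[:1]).reshape(n_contour_pts, 2)
print("predicted contour shape:", predicted_contour.shape)
```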
DOI: http://dx.doi.org/10.1109/TSMCB.2011.2176115
June 2012