Publications by authors named "Aydogan Ozcan"

205 Publications

Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue.

Light Sci Appl 2020 May 6;9(1):78. Epub 2020 May 6.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.

Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time consuming, labour intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a "digital staining matrix", which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones' silver stain, and Masson's trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross section, which is currently not feasible with standard histochemical staining methods.
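The two-input scheme described above can be sketched in a few lines. This is a minimal illustration only; the image shapes, the one-hot stain encoding, and the 50/50 blend region are assumptions for the sketch, not the paper's implementation:

```python
import numpy as np

# Hypothetical sketch: combining a label-free autofluorescence image with a
# user-defined "digital staining matrix" as the two inputs of a virtual-
# staining network.
H, W = 64, 64
N_STAINS = 3  # e.g., H&E, Jones' silver, Masson's trichrome

autofluorescence = np.random.rand(H, W, 1).astype(np.float32)

# Per-pixel map selecting which stain to synthesize in each region
# (here: left half stain 0, right half stain 1).
stain_ids = np.zeros((H, W), dtype=int)
stain_ids[:, W // 2:] = 1
staining_matrix = np.eye(N_STAINS, dtype=np.float32)[stain_ids]  # one-hot, (H, W, 3)

# Existing stains can also be blended by mixing the one-hot vectors,
# e.g. a 50/50 mix of stains 0 and 1 in one corner:
staining_matrix[:8, :8] = [0.5, 0.5, 0.0]

# The single network would receive both sources concatenated along channels.
network_input = np.concatenate([autofluorescence, staining_matrix], axis=-1)
print(network_input.shape)  # (64, 64, 4)
```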
http://dx.doi.org/10.1038/s41377-020-0315-y
May 2020

Spectrally encoded single-pixel machine vision using diffractive networks.

Sci Adv 2021 Mar 26;7(13). Epub 2021 Mar 26.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA.

We demonstrate optical networks composed of diffractive layers trained using deep learning to encode the spatial information of objects into the power spectrum of the diffracted light, which are used to classify objects with a single-pixel spectroscopic detector. Using a plasmonic nanoantenna-based detector, we experimentally validated this single-pixel machine vision framework in the terahertz part of the spectrum to optically classify the images of handwritten digits by detecting the spectral power of the diffracted light at ten distinct wavelengths, each representing one class/digit. We also coupled this diffractive network-based spectral encoding with a shallow electronic neural network, which was trained to rapidly reconstruct the images of handwritten digits based on solely the spectral power detected at these ten distinct wavelengths, demonstrating task-specific image decompression. This single-pixel machine vision framework can also be extended to other spectral-domain measurement systems to enable new 3D imaging and sensing modalities integrated with diffractive network-based spectral encoding of information.
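The class-encoded spectral readout described above reduces to a one-line decision rule; a toy sketch in which the ten spectral-power values are fabricated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(spectral_power):
    """spectral_power: length-10 array, one value per class-encoded wavelength.

    The predicted digit is the wavelength channel carrying the most power.
    """
    return int(np.argmax(spectral_power))

# A diffractive network trained on the digit "7" would channel most of the
# diffracted power into the 8th wavelength bin:
power = rng.uniform(0.0, 0.2, size=10)
power[7] = 0.9
print(classify(power))  # 7
```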
http://dx.doi.org/10.1126/sciadv.abd7690
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7997518
March 2021

Recurrent neural network-based volumetric fluorescence microscopy.

Light Sci Appl 2021 Mar 23;10(1):62. Epub 2021 Mar 23.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.

Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including physical, medical and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images that are sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth-of-field of a 63×/1.4NA objective lens, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrated the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including e.g., different sequences of input images, covering various axial permutations and unknown axial positioning errors. We also demonstrated wide-field to confocal cross-modality image transformations using the Recurrent-MZ framework and performed 3D image reconstruction of a sample using a few wide-field 2D fluorescence images as input, matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.
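A minimal sketch of the idea behind Recurrent-MZ (not the authors' architecture): a recurrent unit sequentially ingests a variable number of 2D planes captured at arbitrary axial positions and maintains a fused hidden state from which the volume would be decoded. The image size and the running-average update rule are illustrative assumptions:

```python
import numpy as np

H, W = 32, 32

def recurrent_fuse(planes):
    """planes: list of (H, W) images captured at arbitrary axial positions."""
    hidden = np.zeros((H, W))
    for i, plane in enumerate(planes, start=1):
        # Running-average update: order-insensitive, so shuffled axial
        # sequences (a robustness the paper demonstrates for its trained
        # network) yield the same fused state.
        hidden += (plane - hidden) / i
    return hidden

rng = np.random.default_rng(1)
planes = [rng.random((H, W)) for _ in range(3)]
fused = recurrent_fuse(planes)
fused_shuffled = recurrent_fuse(planes[::-1])
print(np.allclose(fused, fused_shuffled))  # True
```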
http://dx.doi.org/10.1038/s41377-021-00506-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7985192
March 2021

Addressable nanoantennas with cleared hotspots for single-molecule detection on a portable smartphone microscope.

Nat Commun 2021 02 11;12(1):950. Epub 2021 Feb 11.

Department of Chemistry and Center for NanoScience, Ludwig-Maximilians-Universität München, München, Germany.

The advent of highly sensitive photodetectors and the development of photostabilization strategies made detecting the fluorescence of single molecules a routine task in many labs around the world. However, to this day, this process requires cost-intensive optical instruments due to the truly nanoscopic signal of a single emitter. Simplifying single-molecule detection would enable many exciting applications, e.g., in point-of-care diagnostic settings, where costly equipment would be prohibitive. Here, we introduce addressable NanoAntennas with Cleared HOtSpots (NACHOS) that are scaffolded by DNA origami nanostructures and can be specifically tailored for the incorporation of bioassays. Single emitters placed in NACHOS appear up to 461-fold (average of 89 ± 7-fold) brighter, enabling their detection with a customary smartphone camera and an 8-US-dollar objective lens. To prove the applicability of our system, we built a portable, battery-powered smartphone microscope and successfully carried out an exemplary single-molecule detection assay for DNA specific to antibiotic-resistant Klebsiella pneumoniae on the road.
http://dx.doi.org/10.1038/s41467-021-21238-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7878865
February 2021

Publisher Correction: Ensemble learning of diffractive optical networks.

Light Sci Appl 2021 Feb 7;10(1):34. Epub 2021 Feb 7.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.

http://dx.doi.org/10.1038/s41377-021-00473-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7868352
February 2021

COVID-19 biosensing technologies.

Biosens Bioelectron 2021 04 28;178:113046. Epub 2021 Jan 28.

Electrical & Computer Engineering and Bioengineering Departments, UCLA, Los Angeles, CA, 90095, USA.

http://dx.doi.org/10.1016/j.bios.2021.113046
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7843064
April 2021

Neural Network-Based On-Chip Spectroscopy Using a Scalable Plasmonic Encoder.

ACS Nano 2021 04 5;15(4):6305-6315. Epub 2021 Feb 5.

Department of Electrical and Computer Engineering, University of California, Los Angeles, California 90095, United States.

Conventional spectrometers are limited by trade-offs set by size, cost, signal-to-noise ratio (SNR), and spectral resolution. Here, we demonstrate a deep learning-based spectral reconstruction framework using a compact and low-cost on-chip sensing scheme that is not constrained by many of the design trade-offs inherent to grating-based spectroscopy. The system employs a plasmonic spectral encoder chip containing 252 different tiles of nanohole arrays fabricated using a scalable and low-cost imprint lithography method, where each tile has a specific geometry and thus a specific optical transmission spectrum. The illumination spectrum of interest directly impinges upon the plasmonic encoder, and a CMOS image sensor captures the transmitted light without any lenses, gratings, or other optical components in between, making the entire hardware highly compact, lightweight, and field-portable. A trained neural network then reconstructs the unknown spectrum using the transmitted intensity information from the spectral encoder in a feed-forward and noniterative manner. Benefiting from the parallelization of neural networks, the average inference time per spectrum is ∼28 μs, which is much faster compared to other computational spectroscopy approaches. When blindly tested on 14 648 unseen spectra with varying complexity, our deep-learning based system identified 96.86% of the spectral peaks with an average peak localization error, bandwidth error, and height error of 0.19 nm, 0.18 nm, and 7.60%, respectively. This system is also highly tolerant to fabrication defects that may arise during the imprint lithography process, which further makes it ideal for applications that demand cost-effective, field-portable, and sensitive high-resolution spectroscopy tools.
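As a rough illustrative analogue of the encoder-plus-decoder idea above, one can model each of the 252 tiles as a linear spectral projector and invert the resulting intensity readings. Here ridge-regularized least squares stands in for the paper's trained feed-forward neural network, and all spectra and transmission curves are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tiles, n_bins = 252, 100

# Each tile has its own transmission spectrum (random stand-ins here).
T = rng.random((n_tiles, n_bins))

# Unknown illumination spectrum: a Gaussian peak for illustration.
true_spectrum = np.exp(-0.5 * ((np.arange(n_bins) - 40) / 5.0) ** 2)

# Lensless sensor readings: one transmitted intensity per tile.
measurements = T @ true_spectrum

# Ridge inversion stands in for the paper's neural-network reconstruction.
lam = 1e-3
recovered = np.linalg.solve(T.T @ T + lam * np.eye(n_bins), T.T @ measurements)
print(float(np.abs(recovered - true_spectrum).max()))  # small residual error
```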
http://dx.doi.org/10.1021/acsnano.1c00079
April 2021

All-optical information-processing capacity of diffractive surfaces.

Light Sci Appl 2021 Jan 28;10(1):25. Epub 2021 Jan 28.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.

The precise engineering of materials and surfaces has been at the heart of some of the recent advances in optics and photonics. These advances related to the engineering of materials with new functionalities have also opened up exciting avenues for designing trainable surfaces that can perform computation and machine-learning tasks through light-matter interactions and diffraction. Here, we analyze the information-processing capacity of coherent optical networks formed by diffractive surfaces that are trained to perform an all-optical computational task between a given input and output field-of-view. We show that the dimensionality of the all-optical solution space covering the complex-valued transformations between the input and output fields-of-view is linearly proportional to the number of diffractive surfaces within the optical network, up to a limit that is dictated by the extent of the input and output fields-of-view. Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher-dimensional subspace of the complex-valued linear transformations between a larger input field-of-view and a larger output field-of-view and exhibit depth advantages in terms of their statistical inference, learning, and generalization capabilities for different image classification tasks when compared with a single trainable diffractive surface. These analyses and conclusions are broadly applicable to various forms of diffractive surfaces, including, e.g., plasmonic and/or dielectric-based metasurfaces and flat optics, which can be used to form all-optical processors.
http://dx.doi.org/10.1038/s41377-020-00439-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7844294
January 2021

Ensemble learning of diffractive optical networks.

Light Sci Appl 2021 Jan 11;10(1):14. Epub 2021 Jan 11.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.

A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency and computation speed. Diffractive deep neural networks (D²NNs) form such an optical computing framework that benefits from deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D²NNs have demonstrated success in various tasks, including object classification, the spectral encoding of information, optical pulse shaping and imaging. Here, we substantially improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training 1252 D²NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D²NNs that collectively improved the image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N = 14 and N = 30 D²NNs achieve blind testing accuracies of 61.14 ± 0.23% and 62.13 ± 0.05%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D²NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap to extend the application space of diffractive optical image classification and machine vision systems.
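The pruning step described above can be illustrated with a greedy ensemble-selection toy. Synthetic class scores stand in for the diffractive networks' optical outputs, and the greedy rule is a generic sketch, not the authors' exact pruning algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_samples, n_classes = 8, 200, 10
labels = rng.integers(0, n_classes, size=n_samples)

# Weak, diverse models: noisy class scores with a margin on the true label.
scores = rng.normal(0.0, 1.0, size=(n_models, n_samples, n_classes))
scores[:, np.arange(n_samples), labels] += 1.5

def accuracy(model_ids):
    """Validation accuracy of the score-averaged ensemble."""
    ensemble = scores[list(model_ids)].mean(axis=0)
    return (ensemble.argmax(axis=1) == labels).mean()

# Greedily add whichever model most improves the ensemble; stop when no
# remaining candidate helps.
selected, best = [], 0.0
remaining = set(range(n_models))
while remaining:
    cand = max(remaining, key=lambda m: accuracy(selected + [m]))
    if accuracy(selected + [cand]) <= best:
        break
    selected.append(cand)
    best = accuracy(selected)
    remaining.remove(cand)

# The pruned ensemble is at least as accurate as the best single model.
print(best >= max(accuracy([m]) for m in range(n_models)))  # True
```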
http://dx.doi.org/10.1038/s41377-020-00446-w
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7801728
January 2021

Terahertz pulse shaping using diffractive surfaces.

Nat Commun 2021 Jan 4;12(1):37. Epub 2021 Jan 4.

Department of Electrical and Computer Engineering, University of California Los Angeles (UCLA), Los Angeles, CA, 90095, USA.

Recent advances in deep learning have been providing non-intuitive solutions to various inverse problems in optics. At the intersection of machine learning and optics, diffractive networks merge wave-optics with deep learning to design task-specific elements to all-optically perform various tasks such as object classification and machine vision. Here, we present a diffractive network, which is used to shape an arbitrary broadband pulse into a desired optical waveform, forming a compact and passive pulse engineering system. We demonstrate the synthesis of various pulses by designing diffractive layers that collectively engineer the temporal waveform of an input terahertz pulse. Our results demonstrate direct pulse shaping in the terahertz part of the spectrum, where the amplitude and phase of the input wavelengths are independently controlled through a passive diffractive device, without the need for an external pump. Furthermore, a physical transfer learning approach is presented to illustrate pulse-width tunability by replacing part of an existing network with newly trained diffractive layers, demonstrating its modularity. This learning-based diffractive pulse engineering framework can find broad applications in e.g., communications, ultra-fast imaging and spectroscopy.
http://dx.doi.org/10.1038/s41467-020-20268-z
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7782497
January 2021

Inference in artificial intelligence with deep optics and photonics.

Nature 2020 12 2;588(7836):39-47. Epub 2020 Dec 2.

École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.

Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.
http://dx.doi.org/10.1038/s41586-020-2973-6
December 2020

Analysis of Diffractive Optical Neural Networks and Their Integration with Electronic Neural Networks.

IEEE J Sel Top Quantum Electron 2020 Jan-Feb;26(1). Epub 2019 Jun 6.

Electrical and Computer Engineering Department, Bioengineering Department, University of California, Los Angeles, CA 90095 USA, and also with the California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA.

Optical machine learning offers advantages in terms of power efficiency, scalability and computation speed. Recently, an optical machine learning method based on Diffractive Deep Neural Networks (D²NNs) has been introduced to execute a function as the input light diffracts through passive layers, designed by deep learning using a computer. Here we introduce improvements to D²NNs by changing the training loss function and reducing the impact of vanishing gradients in the error back-propagation step. Using five phase-only diffractive layers, we numerically achieved a classification accuracy of 97.18% and 89.13% for optical recognition of handwritten digits and fashion products, respectively; using both phase and amplitude modulation (complex-valued) at each layer, our inference performance improved to 97.81% and 89.32%, respectively. Furthermore, we report the integration of D²NNs with electronic neural networks to create hybrid-classifiers that significantly reduce the number of input pixels into an electronic network using an ultra-compact front-end D²NN with a layer-to-layer distance of a few wavelengths, also reducing the complexity of the successive electronic network. Using a 5-layer phase-only D²NN jointly-optimized with a single fully-connected electronic layer, we achieved a classification accuracy of 98.71% and 90.04% for the recognition of handwritten digits and fashion products, respectively. Moreover, the input to the electronic network was compressed by >7.8 times down to 10×10 pixels. Beyond creating low-power and high-frame rate machine learning platforms, D²NN-based hybrid neural networks will find applications in smart optical imager and sensor design.
http://dx.doi.org/10.1109/JSTQE.2019.2921376
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7677864
June 2019

Label-free detection of cysts using a deep learning-enabled portable imaging flow cytometer.

Lab Chip 2020 11;20(23):4404-4412

Electrical and Computer Engineering Department, University of California, Los Angeles, 420 Westwood Plaza, Engineering IV. 68-119, Los Angeles, CA 90095, USA.

We report a field-portable and cost-effective imaging flow cytometer that uses deep learning and holography to accurately detect Giardia lamblia cysts in water samples at a volumetric throughput of 100 mL h⁻¹. This flow cytometer uses lens-free color holographic imaging to capture and reconstruct phase and intensity images of microscopic objects in a continuously flowing sample, and automatically identifies Giardia lamblia cysts in real-time without the use of any labels or fluorophores. The imaging flow cytometer is housed in an environmentally-sealed enclosure with dimensions of 19 cm × 19 cm × 16 cm and weighs 1.6 kg. We demonstrate that this portable imaging flow cytometer coupled to a laptop computer can detect and quantify, in real-time, low levels of Giardia contamination (e.g., <10 cysts per 50 mL) in both freshwater and seawater samples. The field-portable and label-free nature of this method has the potential to allow rapid and automated screening of drinking water supplies in resource limited settings in order to detect waterborne parasites and monitor the integrity of the filters used for water treatment.
http://dx.doi.org/10.1039/d0lc00708k
November 2020

Sensing of electrolytes in urine using a miniaturized paper-based device.

Sci Rep 2020 08 12;10(1):13620. Epub 2020 Aug 12.

Department of Mechanical Engineering, Koc University, Sariyer, Istanbul, 34450, Turkey.

Analyzing electrolytes in urine, such as sodium, potassium, calcium, chloride, and nitrite, has significant diagnostic value in detecting various conditions, such as kidney disorder, urinary stone disease, urinary tract infection, and cystic fibrosis. Ideally, by regularly monitoring these ions with the convenience of dipsticks and portable tools, such as cellphones, informed decision making is possible to control the consumption of these ions. Here, we report a paper-based sensor for measuring the concentration of sodium, potassium, calcium, chloride, and nitrite in urine, accurately quantified using a smartphone-enabled platform. By testing the device with both Tris buffer and artificial urine containing a wide range of electrolyte concentrations, we demonstrate that the proposed device can be used for detecting potassium, calcium, chloride, and nitrite within the whole physiological range of concentrations, and for binary quantification of sodium concentration.
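A hypothetical sketch of the smartphone quantification step: colorimetric intensities read from the paper sensor are mapped to ion concentrations through a calibration curve fitted on known standards. The linear model and all numbers below are illustrative assumptions, not the device's actual calibration:

```python
import numpy as np

# Known standards: concentration (e.g., mM) vs. mean colorimetric signal
# extracted from smartphone images of the sensing zones.
standards_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
standards_signal = np.array([0.05, 0.21, 0.38, 0.70, 1.33])

# Fit a linear calibration curve: signal = slope * conc + intercept.
slope, intercept = np.polyfit(standards_conc, standards_signal, 1)

def signal_to_concentration(signal):
    """Invert the calibration curve for an unknown sample's signal."""
    return (signal - intercept) / slope

print(round(signal_to_concentration(0.70), 1))
```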
http://dx.doi.org/10.1038/s41598-020-70456-6
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7423618
August 2020

Early detection and classification of live bacteria using time-lapse coherent imaging and deep learning.

Light Sci Appl 2020 10;9:118. Epub 2020 Jul 10.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA.

Early identification of pathogenic bacteria in food, water, and bodily fluids is very important and yet challenging, owing to sample complexities and large sample volumes that need to be rapidly screened. Existing screening methods based on plate counting or molecular analysis present various tradeoffs with regard to the detection time, accuracy/sensitivity, cost, and sample preparation complexity. Here, we present a computational live bacteria detection system that periodically captures coherent microscopy images of bacterial growth inside a 60-mm-diameter agar plate and analyses these time-lapsed holograms using deep neural networks for the rapid detection of bacterial growth and the classification of the corresponding species. The performance of our system was demonstrated by the rapid detection of Escherichia coli and total coliform bacteria (i.e., Klebsiella aerogenes and Klebsiella pneumoniae subsp. pneumoniae) in water samples, shortening the detection time by >12 h compared to the Environmental Protection Agency (EPA)-approved methods. Using the preincubation of samples in growth media, our system achieved a limit of detection (LOD) of ~1 colony forming unit (CFU)/L in ≤9 h of total test time. This platform is highly cost-effective (~$0.6/test) and has high-throughput with a scanning speed of 24 cm/min over the entire plate surface, making it highly suitable for integration with the existing methods currently used for bacteria detection on agar plates. Powered by deep learning, this automated and cost-effective live bacteria detection platform can be transformative for a wide range of applications in microbiology by significantly reducing the detection time and automating the identification of colonies without labelling or the need for an expert.
http://dx.doi.org/10.1038/s41377-020-00358-9
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7351775
July 2020

Optical Technologies for Improving Healthcare in Low-Resource Settings: introduction to the feature issue.

Biomed Opt Express 2020 Jun 18;11(6):3091-3094. Epub 2020 May 18.

Wellman Center for Photomedicine, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114, USA.

This feature issue of Biomedical Optics Express presents a cross-section of interesting and emerging work of relevance to optical technologies in low-resource settings. In particular, the technologies described here aim to address challenges to meeting healthcare needs in resource-constrained environments, including in rural and underserved areas. This collection of 18 papers includes papers on both optical system design and image analysis, with applications demonstrated for ex vivo and in vivo use. Altogether, these works portray the importance of global health research to the scientific community and the role that optics can play in addressing some of the world's most pressing healthcare challenges.
http://dx.doi.org/10.1364/BOE.397698
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7316015
June 2020

Automated screening of sickle cells using a smartphone-based microscope and deep learning.

NPJ Digit Med 2020 22;3:76. Epub 2020 May 22.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA.

Sickle cell disease (SCD) is a major public health priority throughout much of the world, affecting millions of people. In many regions, particularly those in resource-limited settings, SCD is not consistently diagnosed. In Africa, where the majority of SCD patients reside, more than 50% of the 0.2-0.3 million children born with SCD each year will die from it; many of these deaths are in fact preventable with correct diagnosis and treatment. Here, we present a deep learning framework which can perform automatic screening of sickle cells in blood smears using a smartphone microscope. This framework uses two distinct, complementary deep neural networks. The first neural network enhances and standardizes the blood smear images captured by the smartphone microscope, spatially and spectrally matching the image quality of a laboratory-grade benchtop microscope. The second network acts on the output of the first image enhancement neural network and is used to perform the semantic segmentation between healthy and sickle cells within a blood smear. These segmented images are then used to rapidly determine the SCD diagnosis per patient. We blindly tested this mobile sickle cell detection method using blood smears from 96 unique patients (including 32 SCD patients) that were imaged by our smartphone microscope, and achieved ~98% accuracy, with an area-under-the-curve of 0.998. With its high accuracy, this mobile and cost-effective method has the potential to be used as a screening tool for SCD and other blood cell disorders in resource-limited settings.
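The per-patient decision step implied by the pipeline above might look like the following sketch: after semantic segmentation labels each pixel as background, healthy, or sickle, a simple statistic (the fraction of cell pixels classified as sickle) is thresholded to produce the screening result. The label values and the 5% threshold are made-up illustrations, not the study's criteria:

```python
import numpy as np

BACKGROUND, HEALTHY, SICKLE = 0, 1, 2

def screen_patient(segmentation_maps, sickle_fraction_threshold=0.05):
    """Aggregate segmentation maps from one patient's smear into a decision."""
    healthy = sum((m == HEALTHY).sum() for m in segmentation_maps)
    sickle = sum((m == SICKLE).sum() for m in segmentation_maps)
    fraction = sickle / max(healthy + sickle, 1)
    return bool(fraction >= sickle_fraction_threshold)

# Synthetic segmentation output for two fields-of-view of one smear,
# with ~10% of cell pixels labeled sickle:
rng = np.random.default_rng(0)
smear = rng.choice([BACKGROUND, HEALTHY, SICKLE], size=(2, 64, 64),
                   p=[0.70, 0.27, 0.03])
print(screen_patient(list(smear)))  # True
```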
http://dx.doi.org/10.1038/s41746-020-0282-y
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7244537
May 2020

Deep learning-enabled point-of-care sensing using multiplexed paper-based sensors.

NPJ Digit Med 2020 7;3:66. Epub 2020 May 7.

Department of Electrical and Computer Engineering, University of California, Los Angeles, CA USA.

We present a deep learning-based framework to design and quantify point-of-care sensors. As a use-case, we demonstrated a low-cost and rapid paper-based vertical flow assay (VFA) for high sensitivity C-Reactive Protein (hsCRP) testing, commonly used for assessing risk of cardio-vascular disease (CVD). A machine learning-based framework was developed to (1) determine an optimal configuration of immunoreaction spots and conditions, spatially-multiplexed on a sensing membrane, and (2) accurately infer target analyte concentration. Using a custom-designed handheld VFA reader, a clinical study with 85 human samples showed a competitive coefficient-of-variation of 11.2% and linearity of R² = 0.95 among blindly-tested VFAs in the hsCRP range (i.e., 0-10 mg/L). We also demonstrated a mitigation of the hook-effect due to the multiplexed immunoreactions on the sensing membrane. This paper-based computational VFA could expand access to CVD testing, and the presented framework can be broadly used to design cost-effective and mobile point-of-care sensors.
http://dx.doi.org/10.1038/s41746-020-0274-y
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206101
May 2020

Contact lens-based lysozyme detection in tear using a mobile sensor.

Lab Chip 2020 04 30;20(8):1493-1502. Epub 2020 Mar 30.

Department of Electrical and Computer Engineering, University of California, Los Angeles, USA.

We report a method for sensing analytes in tear-fluid using commercial contact lenses (CLs) as sample collectors for subsequent analysis with a cost-effective and field-portable reader. In this study we quantify lysozyme, the most prevalent protein in tear fluid, non-specifically bound to CLs worn by human participants. Our mobile reader uses time-lapse imaging to capture an increasing fluorescent signal in a standard well-plate, the rate-of-change of which is used to indirectly infer lysozyme concentration through the use of a standard curve. We empirically determined the best-suited CL material for our sampling procedure and assay, and subsequently monitored the lysozyme levels of nine healthy human participants over a two-week period. Of these participants who were regular CL wearers (6 out of 9), we observed an increase in lysozyme levels from 6.89 ± 2.02 μg mL⁻¹ to 10.72 ± 3.22 μg mL⁻¹ (mean ± SD) when inducing an instance of digital eye-strain by asking them to play a game on their mobile-phones during the CL wear-duration. We also observed a lower mean lysozyme concentration (2.43 ± 1.66 μg mL⁻¹) in a patient cohort with dry eye disease (DED) as compared to the average monitoring level of healthy (no DED) human participants (6.89 ± 2.02 μg mL⁻¹). Taken together, this study demonstrates tear-fluid analysis with simple and non-invasive sampling steps along with a rapid, easy-to-use, and cost-effective measurement system, ultimately indicating physiological differences in human participants. We believe this method could be used in future tear-fluid studies, even supporting multiplexed detection of a panel of tear biomarkers toward improved diagnostics and prognostics as well as personalized mobile-health applications.
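The rate-of-change readout described above reduces to a slope fit followed by a standard-curve lookup; a minimal sketch in which all concentrations, rates, and the linear standard curve are illustrative numbers, not the assay's actual values:

```python
import numpy as np

# Time-lapse sampling points of the well-plate fluorescence (minutes).
times_min = np.arange(0, 30, 5, dtype=float)

# Standard curve: fluorescence rate-of-change measured for known lysozyme
# concentrations (assumed linear purely for this sketch).
std_conc = np.array([2.0, 5.0, 10.0])   # ug/mL
std_rate = np.array([0.8, 2.0, 4.0])    # fluorescence units / min

k, b = np.polyfit(std_conc, std_rate, 1)

def infer_concentration(fluorescence):
    """Fit the signal's slope over time, then invert the standard curve."""
    rate = np.polyfit(times_min, fluorescence, 1)[0]
    return (rate - b) / k

# Simulated time-lapse from an unknown sample rising at 2.0 units/min:
sample = 10.0 + 2.0 * times_min
print(round(infer_concentration(sample), 2))  # 5.0
```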
http://dx.doi.org/10.1039/c9lc01039d
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7189769
April 2020

Measurement of serum phosphate levels using a mobile sensor.

Analyst 2020 Mar;145(5):1841-1848

Department of Electrical and Computer Engineering, University of California, Los Angeles, CA 90095, USA. and Department of Bioengineering, University of California, Los Angeles, CA 90095, USA and California NanoSystems Institute, University of California, Los Angeles, CA 90095, USA and Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA.

The measurement of serum phosphate concentration is crucial for patients with advanced chronic kidney disease (CKD) and those on maintenance dialysis, as abnormal phosphate levels may be associated with severe health risks. It is important to monitor serum phosphate levels on a regular basis in these patients; however, such measurements are generally limited to every 0.5-3 months, depending on the severity of CKD. This is due to the fact that serum phosphate measurements can only be performed at regular clinic visits, in addition to cost considerations. Here we present a portable and cost-effective point-of-care device capable of measuring serum phosphate levels using a single drop of blood (<60 μl). This is achieved by integrating a paper-based microfluidic platform with a custom-designed smartphone reader. This mobile sensor was tested on patients undergoing dialysis, where whole blood samples were acquired before starting the hemodialysis and during the three-hour treatment. This sampling during the hemodialysis, under patient consent, allowed us to test blood samples with a wide range of phosphate concentrations, and our results showed a strong correlation with the ground truth laboratory tests performed on the same patient samples (Pearson coefficient r = 0.95 and p < 0.001). Our 3D-printed smartphone attachment weighs about 400 g and costs less than 80 USD, whereas the material cost for the disposable test is <3.5 USD (under low volume manufacturing). This low-cost and easy-to-operate system can be used to measure serum phosphate levels at the point-of-care in about 45 min and can potentially be used on a daily basis by patients at home.
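The validation statistic quoted above (Pearson r between the mobile sensor and the laboratory ground truth) is straightforward to compute; a small self-contained sketch follows, using invented paired readings rather than the patient data from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired phosphate readings (mg/dL): mobile sensor vs. clinical laboratory
sensor = [2.1, 3.4, 4.8, 6.0, 7.5, 9.1]
lab = [2.0, 3.6, 4.5, 6.2, 7.3, 9.4]

r = pearson_r(sensor, lab)
```

A correlation near 1 on samples spanning the clinically relevant range (as obtained here by sampling throughout hemodialysis) is what supports agreement with the reference method.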
http://dx.doi.org/10.1039/c9an02215e
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7058527
March 2020

Holographic detection of nanoparticles using acoustically actuated nanolenses.

Nat Commun 2020 01 16;11(1):171. Epub 2020 Jan 16.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.

The optical detection of nanoparticles, including viruses and bacteria, underpins many of the biological, physical and engineering sciences. However, due to their low inherent scattering, detection of these particles remains challenging, requiring complex instrumentation involving extensive sample preparation methods, especially when sensing is performed in liquid media. Here we present an easy-to-use, high-throughput, label-free and cost-effective method for detecting nanoparticles in low volumes of liquids (25 nL) on a disposable chip, using an acoustically actuated lens-free holographic system. By creating an ultrasonic standing wave in the liquid sample, placed on a low-cost glass chip, we cause deformations in a thin liquid layer (850 nm) containing the target nanoparticles (≥140 nm), resulting in the creation of localized lens-like liquid menisci. We also show that the same acoustic waves, used to create the nanolenses, can mitigate against non-specific, adventitious nanoparticle binding, without the need for complex surface chemistries acting as blocking agents.
http://dx.doi.org/10.1038/s41467-019-13802-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6965092
January 2020

Fractal LAMP: Label-Free Analysis of Fractal Precipitate for Digital Loop-Mediated Isothermal Nucleic Acid Amplification.

ACS Sens 2020 02 21;5(2):385-394. Epub 2020 Jan 21.

Department of Bioengineering, University of California, Los Angeles, California 90095, United States.

Nucleic acid amplification assays including loop-mediated isothermal amplification (LAMP) are routinely used in diagnosing diseases and monitoring water and food quality. The results of amplification in these assays are commonly measured with an analog fluorescence readout, which requires specialized optical equipment and can lack quantitative precision. Digital analysis of amplification in small fluid compartments based on exceeding a threshold fluorescence level can enhance the quantitative precision of nucleic acid assays (i.e., digital nucleic acid amplification assays), but still requires specialized optical systems for fluorescence readout and the inclusion of a fluorescent dye. Here, we report Fractal LAMP, an automated method to detect amplified DNA in subnanoliter scale droplets following LAMP in a label-free manner. Our computer vision algorithm achieves high accuracy detecting DNA amplification in droplets by identifying LAMP byproducts that form fractal structures observable in brightfield microscopy. The capabilities of Fractal LAMP are further realized by developing a Bayesian model to estimate DNA concentrations for unknown samples and a bootstrapping method to estimate the number of droplets required to achieve target limits of detection. This digital, label-free assay has the potential to lower reagent and reader cost for nucleic acid measurement while maintaining high quantitative accuracy over 3 orders of magnitude of concentration.
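The abstract describes estimating DNA concentration from the fraction of droplets that amplified. The standard statistical backbone of such digital assays is Poisson loading: if a fraction k/n of droplets is positive, the maximum-likelihood concentration is c = −ln(1 − k/n)/v. The sketch below shows that simpler MLE (the paper itself uses a Bayesian model); the droplet counts and volume are invented.

```python
import math

def digital_concentration(n_positive, n_total, droplet_volume_nl):
    """Maximum-likelihood template concentration (copies per nL) for a
    digital assay, assuming Poisson loading of templates into droplets:
    c = -ln(1 - k/n) / v."""
    frac = n_positive / n_total
    if frac >= 1.0:
        raise ValueError("all droplets positive: concentration exceeds dynamic range")
    return -math.log(1.0 - frac) / droplet_volume_nl

# Hypothetical run: 300 of 1000 sub-nanoliter droplets show the fractal precipitate
c = digital_concentration(300, 1000, droplet_volume_nl=0.5)  # copies per nL
```

This is also why the number of droplets sets the limit of detection, as the bootstrapping analysis in the paper quantifies: with few droplets, low concentrations produce zero positives with high probability.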
http://dx.doi.org/10.1021/acssensors.9b01974
February 2020

Smartphone-based turbidity reader.

Sci Rep 2019 12 27;9(1):19901. Epub 2019 Dec 27.

Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA.

Water quality is undergoing significant deterioration due to bacteria, pollutants and other harmful particles, damaging aquatic life and lowering the quality of drinking water. It is, therefore, important to be able to rapidly and accurately measure water quality in a cost-effective manner using, e.g., a turbidimeter. Turbidimeters typically use different illumination angles to measure the scattering and transmittance of light through a sample and translate these readings into a measurement based on the standard nephelometric turbidity unit (NTU). Traditional turbidimeters have high sensitivity and specificity, but they are not field-portable and require electricity to operate in field settings. Here we present a field-portable and cost-effective turbidimeter that is based on a smartphone. This mobile turbidimeter contains an opto-mechanical attachment coupled to the rear camera of the smartphone, which contains two white light-emitting diodes to illuminate the water sample, optical fibers to transmit the light collected from the sample to the camera, an external lens for image formation, and diffusers for uniform illumination of the sample. Including the smartphone, this cost-effective device weighs only ~350 g. In our mobile turbidimeter design, we combined two illumination approaches: transmittance, in which the optical fibers were placed directly below the sample cuvette at 180° with respect to the light source, and nephelometry, in which the optical fibers were placed on the sides of the sample cuvette at a 90° angle with respect to the light source. Images of the end facets of these fiber optic cables were captured using the smartphone and processed using a custom-written image processing algorithm to automatically quantify the turbidity of each sample. Using transmittance and nephelometric readings, our mobile turbidimeter achieved accurate measurements over a large dynamic range, from 0.3 NTU to 2000 NTU. The accurate performance of our smartphone-based turbidimeter was also confirmed with various water samples collected in Los Angeles (USA), bacteria-spiked water samples, as well as diesel-fuel-contaminated water samples. Having a detection limit of ~0.3 NTU, this cost-effective smartphone-based turbidimeter can be a useful analytical tool for screening of water quality in resource-limited settings.
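One common way to combine the two readouts described above is to take the ratio of 90° scattered light to transmitted light and map it through a calibration curve built from known-NTU standards; the sketch below illustrates that ratiometric approach only, with invented calibration values (the paper's actual algorithm is a custom image-processing pipeline).

```python
import numpy as np

def turbidity_ntu(scatter_90, transmitted, calib_ratio, calib_ntu):
    """Estimate turbidity by interpolating the ratio of 90-degree scattered
    to transmitted intensity on a calibration curve of known-NTU standards."""
    ratio = scatter_90 / transmitted
    return float(np.interp(ratio, calib_ratio, calib_ntu))

# Hypothetical calibration with turbidity standards (ratio assumed monotonic in NTU)
calib_ratio = np.array([0.001, 0.01, 0.1, 1.0, 5.0])
calib_ntu = np.array([0.3, 5.0, 50.0, 500.0, 2000.0])

# Hypothetical mean intensities extracted from the fiber end-facet images
ntu = turbidity_ntu(scatter_90=120.0, transmitted=1200.0,
                    calib_ratio=calib_ratio, calib_ntu=calib_ntu)
```

Ratioing the two channels cancels common-mode fluctuations in source brightness, which helps a low-cost LED-based reader stay accurate across a wide dynamic range.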
http://dx.doi.org/10.1038/s41598-019-56474-z
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6934863
December 2019

Point-of-Care Serodiagnostic Test for Early-Stage Lyme Disease Using a Multiplexed Paper-Based Immunoassay and Machine Learning.

ACS Nano 2020 01 18;14(1):229-240. Epub 2019 Dec 18.

Department of Electrical & Computer Engineering, University of California, Los Angeles, California 90025, United States.

Caused by the tick-borne spirochete Borrelia burgdorferi, Lyme disease (LD) is the most common vector-borne infectious disease in North America and Europe. Though timely diagnosis and treatment are effective in preventing disease progression, current tests are insensitive in early-stage LD, with a sensitivity of <50%. Additionally, the serological testing currently recommended by the U.S. Centers for Disease Control and Prevention has high costs (>$400/test) and extended sample-to-answer timelines (>24 h). To address these challenges, we created a cost-effective and rapid point-of-care (POC) test for early-stage LD that assays for antibodies specific to seven antigens and a synthetic peptide in a paper-based multiplexed vertical flow assay (xVFA). We trained a deep-learning-based diagnostic algorithm to select an optimal subset of antigen/peptide targets and then blindly tested our xVFA using human samples (n = 42, n = 54), achieving an area-under-the-curve (AUC), sensitivity, and specificity of 0.950, 90.5%, and 87.0%, respectively, outperforming previous LD POC tests. With batch-specific standardization and threshold tuning, the specificity of our blind-testing performance improved to 96.3%, with an AUC and sensitivity of 0.963 and 85.7%, respectively.
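The threshold tuning mentioned above trades sensitivity against specificity by moving the decision cutoff applied to the network's output score. A minimal sketch of that computation follows; the scores and labels are invented, not the study's data, and the real work also involved batch-specific standardization not shown here.

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of a binary classifier at a given decision
    threshold (labels: 1 = seropositive sample, 0 = control)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical network output scores for ten samples
scores = [0.92, 0.81, 0.77, 0.40, 0.65, 0.30, 0.22, 0.55, 0.15, 0.48]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

sens, spec = sens_spec(scores, labels, threshold=0.5)
```

Sweeping the threshold over all score values and plotting sensitivity against (1 − specificity) yields the ROC curve whose area is the AUC reported in the abstract.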
http://dx.doi.org/10.1021/acsnano.9b08151
January 2020

Design of task-specific optical systems using broadband diffractive neural networks.

Light Sci Appl 2019 12 2;8:112. Epub 2019 Dec 2.

Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA.

Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. The diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize hand-written digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer the light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
http://dx.doi.org/10.1038/s41377-019-0223-1
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6885516
December 2019

Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning.

Nat Methods 2019 12 4;16(12):1323-1331. Epub 2019 Nov 4.

Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA.

We demonstrate that a deep neural network can be trained to virtually refocus a two-dimensional fluorescence image onto user-defined three-dimensional (3D) surfaces within the sample. Using this method, termed Deep-Z, we imaged the neuronal activity of a Caenorhabditis elegans worm in 3D using a time sequence of fluorescence images acquired at a single focal plane, digitally increasing the depth-of-field by 20-fold without any axial scanning, additional hardware or a trade-off of imaging resolution and speed. Furthermore, we demonstrate that this approach can correct for sample drift, tilt and other aberrations, all digitally performed after the acquisition of a single fluorescence image. This framework also cross-connects different imaging modalities to each other, enabling 3D refocusing of a single wide-field fluorescence image to match confocal microscopy images acquired at different sample planes. Deep-Z has the potential to improve volumetric imaging speed while reducing challenges relating to sample drift, aberration and defocusing that are associated with standard 3D fluorescence microscopy.
http://dx.doi.org/10.1038/s41592-019-0622-5
December 2019

Computational cytometer based on magnetically modulated coherent imaging and deep learning.

Light Sci Appl 2019 10 2;8:91. Epub 2019 Oct 2.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA.

Detecting rare cells within blood has numerous applications in disease diagnostics. Existing rare cell detection techniques are typically hindered by their high cost and low throughput. Here, we present a computational cytometer based on magnetically modulated lensless speckle imaging, which introduces oscillatory motion to the magnetic-bead-conjugated rare cells of interest through a periodic magnetic force and uses lensless time-resolved holographic speckle imaging to rapidly detect the target cells in three dimensions (3D). In addition to using cell-specific antibodies to magnetically label target cells, detection specificity is further enhanced through a deep-learning-based classifier that is based on a densely connected pseudo-3D convolutional neural network (P3D CNN), which automatically detects rare cells of interest based on their spatio-temporal features under a controlled magnetic force. To demonstrate the performance of this technique, we built a high-throughput, compact and cost-effective prototype for detecting MCF7 cancer cells spiked in whole blood samples. Through serial dilution experiments, we quantified the limit of detection (LoD) as 10 cells per millilitre of whole blood, which could be further improved through multiplexing parallel imaging channels within the same instrument. This compact, cost-effective and high-throughput computational cytometer can potentially be used for rare cell detection and quantification in bodily fluids for a variety of biomedical applications.
http://dx.doi.org/10.1038/s41377-019-0203-5
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6804677
October 2019

Deep learning in holography and coherent imaging.

Light Sci Appl 2019 9 11;8:85. Epub 2019 Sep 11.

Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA.

Recent advances in deep learning have given rise to a new paradigm of holographic image reconstruction and phase recovery techniques with real-time performance. Through data-driven approaches, these emerging techniques have overcome some of the challenges associated with existing holographic image reconstruction methods while also minimizing the hardware requirements of holography. These recent advances open up a myriad of new opportunities for the use of coherent imaging systems in biomedical and engineering research and related applications.
http://dx.doi.org/10.1038/s41377-019-0196-0
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6804620
September 2019

Computational Image Analysis of Guided Acoustic Waves Enables Rheological Assessment of Sub-nanoliter Volumes.

ACS Nano 2019 10 19;13(10):11062-11069. Epub 2019 Sep 19.

We present a method for the computational image analysis of high frequency guided sound waves based upon the measurement of optical interference fringes, produced at the air interface of a thin film of liquid. These acoustic actuations induce an affine deformation of the liquid, creating a lensing effect that can be readily observed using a simple imaging system. We exploit this effect to measure and analyze the spatiotemporal behavior of the thin liquid film as the acoustic wave interacts with it. We also show that, by investigating the dynamics of the relaxation processes of these deformations when actuation ceases, we are able to determine the liquid's viscosity using just a lens-free imaging system and a simple disposable biochip. Contrary to all other acoustic-based techniques in rheology, our measurements do not require monitoring of the wave parameters to obtain quantitative values for fluid viscosities, for sample volumes as low as 200 pL. We envisage that the proposed methods could enable high throughput, chip-based, reagent-free rheological studies within very small samples.
http://dx.doi.org/10.1021/acsnano.9b03219
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6812326
October 2019