Publications by authors named "Yuanzheng Gong"

17 Publications


Research on accounting and detection of volatile organic compounds from a typical petroleum refinery in Hebei, North China.

Chemosphere 2021 Oct 11;281:130653. Epub 2021 May 11.

State Joint Key Laboratory of Environmental Simulation and Pollution Control, College of Environmental Sciences and Engineering, Peking University, Beijing, 100871, PR China.

A volatile organic compound (VOC) emissions inventory was established for a petroleum refinery in Hebei. The refinery emits 1859.2 tons of VOCs per year, with the wastewater collection and treatment system being the largest source, accounting for 59.6%, followed by the recirculating cooling water system (13.4%), storage tanks (11.1%), and equipment leaks (9.4%). Organized and fugitive samples were collected simultaneously for the different processes of each emissions source. A total of 100 VOC species were characterized and quantified using a gas chromatography-mass spectrometry/flame ionization detection system. The VOC emission concentrations and chemical compositions of the processes differed considerably, with alkanes the dominant chemical group in most processes. From the composite source profile weighted by the amount of VOC emissions, the characteristic species of this petroleum refinery were ethane (15.4%), propylene (11.7%), propane (8.5%), iso-pentane (8.3%), and toluene (4.7%). The ozone (O3) formation potential (OFP) and secondary organic aerosol formation potential (SOAP) were evaluated, and the results indicated that alkenes (mainly propylene) and aromatics (mainly toluene) are the priority control compounds. This study clarifies the current status of VOC emissions from the refinery in terms of emission intensity, chemical composition, and O3 and SOA formation reactivity. The key emission sources and species identified provide scientific support for reducing refinery emissions from the petrochemical industry.
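The OFP/SOAP screening described above amounts to multiplying each species' annual emission by a reactivity coefficient (e.g., maximum incremental reactivity for ozone) and ranking the products. The sketch below illustrates that bookkeeping only; the species list and coefficient values are placeholders, not the figures used in the study.

```python
# Hypothetical bookkeeping sketch: per-species OFP_i = E_i * MIR_i and
# SOAP_i = E_i * SOAP_coefficient_i, then rank the products. Species and
# coefficient values below are placeholders, not the ones used in the study.
emissions_t_per_yr = {                      # annual emissions (illustrative values)
    "ethane": 286.3, "propylene": 217.5, "propane": 158.0,
    "iso-pentane": 154.3, "toluene": 87.4,
}
mir = {"ethane": 0.28, "propylene": 11.66, "propane": 0.49,
       "iso-pentane": 1.45, "toluene": 4.00}         # g O3 per g VOC (placeholder MIR values)
soap_coeff = {"ethane": 0.1, "propylene": 1.6, "propane": 0.0,
              "iso-pentane": 0.2, "toluene": 100.0}  # relative to toluene = 100 (placeholder)

ofp = {s: e * mir[s] for s, e in emissions_t_per_yr.items()}
soap = {s: e * soap_coeff[s] for s, e in emissions_t_per_yr.items()}
for s in sorted(ofp, key=ofp.get, reverse=True):     # rank species by ozone formation potential
    print(f"{s:12s}  OFP = {ofp[s]:8.1f}   SOAP = {soap[s]:9.1f}")
```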
Source: http://dx.doi.org/10.1016/j.chemosphere.2021.130653
October 2021

Semi-autonomous image-guided brain tumour resection using an integrated robotic system: A bench-top study.

Int J Med Robot 2018 Feb 3;14(1). Epub 2017 Nov 3.

Biorobotics Laboratory, Department of Electrical Engineering, University of Washington, Seattle, WA, USA.

Background: Complete brain tumour resection is critical to patients' survival and long-term quality of life. This paper introduces a prototype medical robotic system that aims to automatically detect and remove brain tumour residues after the bulk of the tumour has been removed through conventional surgery.

Methods: We focus on the development of an integrated surgical robotic system for image-guided robotic brain surgery. The Behavior Tree framework is explored to coordinate cross-platform medical subtasks.

Results: The integrated system was tested on a simulated laboratory platform. Results and performance indicate the feasibility of supervised semi-automation for residual brain tumour ablation in a simulated surgical cavity with sub-millimetre accuracy. The modularity in the control architecture allows straightforward integration of further medical devices.

Conclusions: This work presents a semi-automated laboratory setup, simulating an intraoperative robotic neurosurgical procedure with real-time endoscopic image guidance and provides a foundation for the future transition from engineering approaches to clinical application.
Source: http://dx.doi.org/10.1002/rcs.1872
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5762424
February 2018

Toward real-time quantification of fluorescence molecular probes using target/background ratio for guiding biopsy and endoscopic therapy of esophageal neoplasia.

J Med Imaging (Bellingham) 2017 Apr 24;4(2):024502. Epub 2017 May 24.

University of Washington, Department of Mechanical Engineering, Human Photonics Lab, Seattle, Washington, United States.

Multimodal endoscopy using fluorescence molecular probes is a promising method of surveying the entire esophagus to detect cancer progression. The ratio of target fluorescence to that of the surrounding background provides a quantitative value that is diagnostic for progression from Barrett's esophagus to high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC). However, quantification of fluorescence images is currently done only after the endoscopic procedure. We developed a Chan-Vese-based algorithm to segment fluorescence targets and subsequent morphological operations to generate the background region, from which target/background (T/B) ratios are calculated, potentially providing real-time guidance for biopsy and endoscopic therapy. With an initial processing speed of 2 fps and a T/B ratio calculated for each frame, our method provides quasi-real-time quantification of molecular probe labeling to the endoscopist. Furthermore, an automatic computer-aided diagnosis algorithm can be applied to the recorded endoscopic video, and an overall T/B ratio is calculated for each patient. A receiver operating characteristic curve was used to determine the threshold for classification of HGD/EAC using leave-one-out cross-validation. With 92% sensitivity and 75% specificity in classifying HGD/EAC, our automatic algorithm shows promising results for a surveillance procedure to help manage esophageal cancer and other cancers inspected by endoscopy.
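The per-frame computation described above can be approximated with off-the-shelf tools: Chan-Vese segmentation of the fluorescence frame yields the target mask, a dilated ring around it serves as local background, and the T/B ratio is the ratio of mean intensities. A minimal sketch using scikit-image (an illustration, not the authors' code; the dilation radius is an arbitrary choice):

```python
import numpy as np
from skimage.segmentation import chan_vese
from skimage.morphology import binary_dilation, disk

def target_background_ratio(frame: np.ndarray) -> float:
    """frame: single-channel fluorescence image as floats."""
    mask = chan_vese(frame, mu=0.25)                   # two-phase Chan-Vese segmentation
    if mask.all() or not mask.any():
        return 0.0
    if frame[mask].mean() < frame[~mask].mean():       # keep the brighter phase as the target
        mask = ~mask
    ring = binary_dilation(mask, disk(15)) & ~mask     # dilation ring = local background
    if not ring.any():
        return 0.0
    return float(frame[mask].mean() / frame[ring].mean())

# Synthetic check: a bright square (target) on a dim background.
frame = np.full((128, 128), 0.1)
frame[50:70, 50:70] = 0.8
print(f"T/B = {target_background_ratio(frame):.2f}")
```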
Source: http://dx.doi.org/10.1117/1.JMI.4.2.024502
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5443417
April 2017

Aqueous Oxidations Started by TiO2 Photoinduced Holes Can Be a Rate-Determining Step.

Chem Asian J 2017 Aug 19;12(16):2048-2051. Epub 2017 Jul 19.

Key Laboratory of Photochemistry, National Laboratory for Molecular Sciences, CAS Research/Education Center for Excellence in Molecular Sciences, Institute of Chemistry, Chinese Academy of Sciences, Beijing, 100190, P.R. China.

In the aqueous TiO2 photocatalytic hydroxylation of weakly polar aromatics, a series of inverse H/D kinetic isotope effects (KIEs) of 0.7-0.8 was observed, in contrast to the normal H/D KIEs usually observed for polar aromatics. This result indicates that the oxidation initiated by photoinduced holes (h+) can be the rate-determining step.
Source: http://dx.doi.org/10.1002/asia.201700658
August 2017

Feature-based three-dimensional registration for repetitive geometry in machine vision.

J Inf Technol Softw Eng 2016 Aug 26;6(4). Epub 2016 Aug 26.

Mechanical Engineering Department, University of Washington, Seattle, Washington, USA, 98195.

As an important step in three-dimensional (3D) machine vision, 3D registration is the process of aligning two or more 3D point clouds collected from different perspectives into a complete whole. The most popular approach to registering point clouds is to iteratively minimize the difference between them using the Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align point clouds generated by vision-based 3D reconstruction. By utilizing the texture of the object and the robustness of image features, 3D correspondences can be retrieved, so that registering two point clouds reduces to solving for a rigid transformation. A comparison of our method with several ICP variants demonstrates that the proposed algorithm is more accurate, efficient, and robust for registering repetitive geometry. Moreover, the method can also be used to mitigate the high depth uncertainty caused by a small camera baseline in vision-based 3D reconstruction.
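Once image-feature matching has produced 3D-3D correspondences, the rigid transformation has a standard closed-form least-squares solution via SVD (the Kabsch/Umeyama construction). The sketch below shows that step in isolation; it is a generic implementation, not the paper's code:

```python
import numpy as np

def rigid_transform_from_correspondences(P, Q):
    """P, Q: (N, 3) corresponding points; returns R (3x3), t (3,) with R @ P[i] + t ~= Q[i]."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Self-check with a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
a = 0.4
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = rigid_transform_from_correspondences(P, Q)
print(np.allclose(R, R_true, atol=1e-6), t)
```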
Source: http://dx.doi.org/10.4172/2165-7866.1000184
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5341792
August 2016

Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration.

Opt Eng 2017 Jan 30;56(1). Epub 2017 Jan 30.

University of Washington, Mechanical Engineering Department, Seattle, Washington, United States.

Rapid improvements in sophisticated optical components, digital image sensors, and computing power, along with decreasing costs, have enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact operation, high accuracy, rapid acquisition, and suitability for automation, are extremely valuable for inline manufacturing. However, most current optical approaches are suited to exterior rather than internal surfaces of machined parts. A 3-D optical measurement approach based on machine vision is proposed for profiling tiny, complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of the threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, and this is repeated after a rotation to form additional point clouds. These point clouds are registered into a complete reconstruction using the proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of the proposed method for future robotically driven industrial 3-D inspection.
Source: http://dx.doi.org/10.1117/1.OE.56.1.014108
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5341795
January 2017

Toward real-time tumor margin identification in image-guided robotic brain tumor resection.

Proc SPIE Int Soc Opt Eng 2017 Feb 3;10135. Epub 2017 Mar 3.

Human Photonics Lab, Dept. of Mechanical Engr., Univ. of Washington, Seattle, WA 98195.

For patients with malignant brain tumors (glioblastomas), safe maximal resection of the tumor is critical for an increased survival rate. However, complete resection is hard to achieve because of the invasive nature of these tumors: the margin blurs from frank tumor into more normal-appearing brain tissue that single cells or clusters of malignant cells may nonetheless have invaded. Recent developments in fluorescence imaging have shown great potential for improved surgical outcomes by providing surgeons with intraoperative, contrast-enhanced visualization of tumor during neurosurgery. Current near-infrared (NIR) fluorophores, such as indocyanine green (ICG), cyanine 5.5 (Cy5.5), and 5-aminolevulinic acid (5-ALA)-induced protoporphyrin IX (PpIX), are showing clinical potential for targeting and guiding resection of such tumors. Real-time tumor margin identification in NIR imaging could benefit both surgeons and patients by reducing the operating time and space required by other imaging modalities such as intraoperative MRI, and it has the potential to integrate with robotically assisted surgery. In this paper, a segmentation method based on the Chan-Vese model was developed to identify tumor boundaries in an ex vivo mouse brain from relatively noisy fluorescence images acquired by a multimodal scanning fiber endoscope (mmSFE). Tumor contours were obtained iteratively by minimizing an energy function formed by a level set function and the segmentation model. Quantitative segmentation metrics based on the tumor-to-background (T/B) ratio were evaluated. The results demonstrate the feasibility of detecting brain tumor margins in quasi-real time, with the potential to yield improved precision in brain tumor resection or even robotic interventions in the future.
Source: http://dx.doi.org/10.1117/12.2255417
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8315009
February 2017

Mechanistic Studies of TiO2 Photocatalysis and Fenton Degradation of Hydrophobic Aromatic Pollutants in Water.

Chem Asian J 2016 Dec 22;11(24):3568-3574. Epub 2016 Nov 22.

Key Laboratory of Photochemistry, National Laboratory for Molecular Sciences, Institute of Chemistry, Chinese Academy of Sciences, Beijing, 100190, P.R. China.

HO-adduct radicals have been investigated and confirmed as the common initial intermediates in the TiO2 photocatalytic and Fenton degradation of water-insoluble aromatics. However, the evolution of HO-adduct radicals into phenols has not been completely clarified. When 4-d-toluene and p-xylene were degraded by TiO2 photocatalysis and Fenton reactions, respectively, a portion of the 4-deuterium or 4-CH3 group (18-100%) at the attacked ipso position shifted to the adjacent ring position in the phenols formed (the NIH shift, named for the National Institutes of Health, where this phenomenon was first discovered). These results, combined with the observation of a key dienyl cationic intermediate by in situ attenuated total reflectance FTIR spectroscopy, indicate that the evolution of HO-adduct radicals proceeds by a mixed mechanism involving both the carbocation-intermediate pathway and the O2-capturing pathway in aqueous TiO2 photocatalysis as well as in aqueous Fenton reactions.
Source: http://dx.doi.org/10.1002/asia.201601299
December 2016

Path Planning for Semi-automated Simulated Robotic Neurosurgery.

Rep U S 2015 Sep-Oct;2015:2639-2645

Human Photonics Laboratory, Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA.

This paper considers a semi-automated robotic surgical procedure for removing brain tumor margins, a task whose manual execution is tedious and time-consuming for surgeons. We present robust path planning methods for robotic ablation of tumor residues of various shapes, represented as point clouds rather than analytical geometry. Along with the path plans, corresponding metrics are delivered to the surgeon for selecting the optimal candidate for automated robotic ablation. The selected path plan is then executed and tested on the RAVEN II surgical robot platform as part of a semi-automated robotic brain tumor ablation procedure in a simulated tissue phantom.
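As a loose illustration of planning directly on point clouds (not the authors' planners), the sketch below orders target points into a serpentine sweep and reports a path-length metric of the kind a surgeon could use to compare candidate plans; the row height and the metric are arbitrary choices:

```python
import numpy as np

def serpentine_path(points: np.ndarray, row_height: float = 1.0):
    """points: (N, 3) ablation targets; returns a visiting order and total path length."""
    rows = np.floor(points[:, 1] / row_height).astype(int)
    order = []
    for i, r in enumerate(np.unique(rows)):
        idx = np.where(rows == r)[0]
        idx = idx[np.argsort(points[idx, 0])]      # sweep along x within each row
        if i % 2 == 1:
            idx = idx[::-1]                        # reverse every other row (serpentine)
        order.extend(idx.tolist())
    path = points[order]
    length = float(np.linalg.norm(np.diff(path, axis=0), axis=1).sum())
    return order, length

targets = np.random.default_rng(1).uniform(0.0, 10.0, size=(200, 3))
order, length = serpentine_path(targets, row_height=2.0)
print(f"{len(order)} targets visited, path length = {length:.1f} (toy units)")
```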
Source: http://dx.doi.org/10.1109/IROS.2015.7353737
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4687488
December 2015

Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope.

Int J Optomechatronics 2015;9(3):238-247. Epub 2015 Jun 24.

Human Photonics Laboratory, Department of Mechanical Engineering, University of Washington, Seattle, USA.

With rapid progress in optoelectronic components and computational power, 3D optical metrology has become increasingly popular in manufacturing and quality control owing to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This paper proposes a new approach to measuring tiny internal 3D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated and compared with the corresponding X-ray 3D data as ground truth, with the quantitative comparison performed using the Iterative Closest Point algorithm.
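For this kind of quantitative comparison against ground truth, point-to-point ICP alternates nearest-neighbour matching with a closed-form rigid update until the clouds converge, and the residual RMS distance then serves as the error figure. A generic, self-contained sketch, not the paper's implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Rigidly align source (N, 3) to target (M, 3); returns aligned points and RMS distance."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                       # nearest-neighbour correspondences
        matched = target[idx]
        cS, cT = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cS).T @ (matched - cT))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                             # closed-form rigid update
        src = (src - cS) @ R.T + cT
    dists, _ = tree.query(src)
    return src, float(np.sqrt((dists ** 2).mean()))

# Toy check: a slightly rotated and shifted copy of the target cloud.
target = np.random.default_rng(2).normal(size=(500, 3))
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
source = target @ Rz.T + 0.05
aligned, rms = icp(source, target)
print(f"RMS distance after ICP: {rms:.4f}")
```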
Source: http://dx.doi.org/10.1080/15599612.2015.1059535
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4670032
June 2015

Semi-autonomous Simulated Brain Tumor Ablation with RavenII Surgical Robot using Behavior Tree.

IEEE Int Conf Robot Autom 2015 May;2015:3868-3875

Human Photonics Laboratory, Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA.

Medical robots have been widely used to assist surgeons in carrying out dexterous surgical tasks in various ways, most of which require the surgeon's direct or indirect operation. A certain level of autonomy in robotic surgery could not only free the surgeon from tedious repetitive tasks but also exploit the robot's advantages of high dexterity and accuracy. This paper presents a semi-autonomous neurosurgical procedure for brain tumor ablation using the RAVEN surgical robot and stereo visual feedback. By integrating the behavior tree framework, the whole surgical task is modeled flexibly and intelligently as nodes and leaves of a behavior tree. This paper makes three main contributions: (1) describing brain tumor ablation as an ideal candidate for autonomous robotic surgery, (2) modeling and implementing the semi-autonomous surgical task using the behavior tree framework, and (3) designing an experimental simulated ablation task for a feasibility study and robot performance analysis.
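A behavior tree composes such a procedure from small, reusable nodes; a Sequence node, for example, ticks its children in order and aborts on the first failure. The sketch below is a minimal illustration with made-up task names, not the authors' task model:

```python
from typing import Callable, List

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node wrapping a callable that reports success or failure."""
    def __init__(self, name: str, fn: Callable[[], bool]):
        self.name, self.fn = name, fn
    def tick(self) -> str:
        ok = self.fn()
        print(f"[{self.name}] -> {'SUCCESS' if ok else 'FAILURE'}")
        return SUCCESS if ok else FAILURE

class Sequence:
    """Ticks children in order; fails as soon as any child fails."""
    def __init__(self, children: List):
        self.children = children
    def tick(self) -> str:
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

ablation_task = Sequence([
    Action("scan_cavity",    lambda: True),   # acquire stereo / fluorescence views
    Action("segment_tumor",  lambda: True),   # locate residual fluorescent tissue
    Action("plan_path",      lambda: True),   # generate an ablation trajectory
    Action("execute_ablate", lambda: True),   # command the surgical robot
])
print("Task result:", ablation_task.tick())
```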
Source: http://dx.doi.org/10.1109/ICRA.2015.7139738
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4578323
May 2015

Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot.

J Med Imaging (Bellingham) 2014 Oct 2;1(3):035002. Epub 2014 Dec 2.

University of Washington , Department of Mechanical Engineering, Human Photonics Laboratory, Seattle, Washington 98195, United States.

Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor decreases survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. For imaging, we developed a scanning fiber endoscope (SFE) that acquires concurrent reflectance and fluorescence wide-field images at high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot, providing programmable motion with feedback control from stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal, physically sized model of a debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove the fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time to create this 3-D surface can be reduced to one-third by using the known trajectories of the robot arm, and the average error of the reconstructed phantom relative to the model design is within 0.67 mm.
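Because the calibrated robot arm supplies the camera pose for each frame, scene points can be triangulated directly from feature matches without re-estimating poses, which is where the reported speed-up comes from. A two-view linear (DLT) triangulation sketch under that assumption, with synthetic intrinsics and geometry (not the paper's calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: matched pixel coords (2,). Returns the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)                 # homogeneous least squares
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])    # synthetic intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera pose 1 (from robot arm)
P2 = K @ np.hstack([np.eye(3), np.array([[-30.0], [0.0], [0.0]])])  # camera pose 2 (known baseline)
X_true = np.array([10.0, 5.0, 200.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))              # recovers ~[10, 5, 200]
```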
Source: http://dx.doi.org/10.1117/1.JMI.1.3.035002
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4478723
October 2014

Bound constrained bundle adjustment for reliable 3D reconstruction.

Opt Express 2015 Apr;23(8):10771-85

Bundle adjustment (BA) is a common estimation algorithm widely used in machine vision as the last step of feature-based three-dimensional (3D) reconstruction. BA is essentially a non-convex nonlinear least-squares problem that simultaneously solves for the 3D coordinates of all feature points describing the scene geometry as well as the camera parameters. Conventional BA treats each parameter either as a fixed value or as an unconstrained variable, depending on whether the parameter is known. When the known parameters are inaccurate but constrained to a range, conventional BA yields an incorrect 3D reconstruction if these parameters are used as fixed values. Alternatively, these inaccurate parameters can be treated as unknown variables, but this discards knowledge of the constraints, and the resulting reconstruction can be erroneous because the non-convex BA optimization can halt at a dramatically incorrect local minimum. In many practical 3D reconstruction applications, variables with range constraints are available, such as a measurement with a range of uncertainty or a bounded estimate. To better utilize these constrained but inaccurate parameters, a bound constrained bundle adjustment (BCBA) algorithm is proposed, developed, and tested in this study. A scanning fiber endoscope (the camera) is used to capture a sequence of images above a surgical phantom (the object) of known geometry. 3D virtual models are reconstructed from these images and compared with the ground truth. The experimental results demonstrate that BCBA achieves more reliable, rapid, and accurate 3D reconstruction than conventional bundle adjustment.
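The bound-constrained idea can be prototyped with SciPy's trust-region reflective solver, which accepts box bounds on selected parameters: reprojection residuals are minimized over a 3D point and a focal length that is only known to lie within a range. A toy sketch with synthetic data, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import least_squares

t2 = np.array([-30.0, 0.0, -40.0])           # second camera offset, assumed known from calibration

def project(f, X, t):
    Xc = X + t                                # identity rotation in this toy setup
    return f * Xc[:2] / Xc[2]

def residuals(params, obs1, obs2):
    f, X = params[0], params[1:4]
    return np.hstack([project(f, X, np.zeros(3)) - obs1,
                      project(f, X, t2) - obs2])        # reprojection errors in both views

f_true, X_true = 800.0, np.array([10.0, 5.0, 200.0])
obs1 = project(f_true, X_true, np.zeros(3))
obs2 = project(f_true, X_true, t2)

x0 = np.array([820.0, 5.0, 5.0, 180.0])                 # rough initial guess
lb = [750.0, -np.inf, -np.inf, 50.0]                    # focal length constrained to [750, 850]
ub = [850.0,  np.inf,  np.inf, np.inf]
sol = least_squares(residuals, x0, bounds=(lb, ub), args=(obs1, obs2))
print("estimated focal length:", sol.x[0], " estimated point:", sol.x[1:4])
```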
Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4523375
http://dx.doi.org/10.1364/OE.23.010771
April 2015

Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model.

Proc SPIE Int Soc Opt Eng 2015;9415:94150C

Human Photonics Lab, Dept. of Mechanical Engineering, Univ. of Washington, Seattle, WA 98195.

The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor-margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured with a micro-positioning stage. From these preliminary results, the computational efficiency of the MATLAB implementation is near real time (2.5 s per pose estimate), which could be improved by implementation in C++. Error analysis produced an average distance error of 3 mm and an orientation error of 2.5 degrees. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
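As a simpler, generic stand-in for the constrained bundle adjustment used here, OpenCV's PnP solver recovers camera pose from matches between 3D model points and 2D image features. The sketch below uses synthetic intrinsics, model points, and a ground-truth pose that exists only to generate the pixel observations:

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])   # synthetic intrinsics
rng = np.random.default_rng(3)
model_pts = rng.uniform([-20, -20, 80], [20, 20, 120], size=(20, 3))  # 3D virtual-model points

rvec_true = np.array([[0.05], [-0.10], [0.02]])   # ground-truth pose used only to synthesize pixels
tvec_true = np.array([[2.0], [-1.0], [5.0]])
img_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, None)
img_pts = img_pts.reshape(-1, 2) + rng.normal(scale=0.3, size=(20, 2))  # add pixel noise

ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, None)
print("recovered rvec:", rvec.ravel(), "tvec:", tvec.ravel())
```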
Source: http://dx.doi.org/10.1117/12.2082872
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4376325
January 2015

Mapping surgical fields by moving a laser-scanning multimodal scope attached to a robot arm.

Proc SPIE Int Soc Opt Eng 2014 Feb 12;9036. Epub 2014 Mar 12.

Human Photonics Lab, Dept. of Mechanical Engineering, Univ. of Washington, Seattle, WA 98195.

Endoscopic visualization during brain tumor removal is challenging because tumor tissue is often visually indistinguishable from healthy tissue. Fluorescence imaging can improve tumor delineation, though it impairs reflectance-based visualization of gross anatomical features. To accurately navigate and resect tumors, we created an ultrathin, flexible scanning fiber endoscope (SFE) that acquires reflectance and fluorescence wide-field images at high resolution. Furthermore, this miniature imaging system is affixed to a robotic arm providing programmable motion of the SFE, from which we generate multimodal surface maps of the surgical field. To test the system, synthetic phantoms of a debulked brain tumor were fabricated with fluorescent spots representing residual tumor. Three-dimensional (3D) surface maps of this surgical field are produced by moving the SFE over the phantom during concurrent reflectance and fluorescence imaging (30 Hz video). SIFT-based feature matching between reflectance images selects a subset of key frames, which are reconstructed in 3D by bundle adjustment. The resulting reconstruction yields a multimodal 3D map of the tumor region that can improve visualization and robotic path planning. Efficiency in creating these 3D maps is important because they are generated multiple times during tumor margin clean-up. By using pre-programmed motions of the robot arm holding the SFE, the computer vision algorithms are optimized for efficiency by reducing search times. Preliminary results indicate that the time to create these multimodal maps of the surgical field can be reduced to one-third by using the known trajectories of the surgical robot moving the image-guided tool.
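One common way to implement this kind of keyframe selection is to re-detect SIFT features in each frame and declare a new keyframe whenever the number of ratio-test matches against the previous keyframe drops below a threshold. A minimal sketch (not the authors' code; the video filename and thresholds are placeholders):

```python
import cv2

def select_keyframes(frames, min_matches=80):
    """frames: iterable of grayscale images; returns indices of selected keyframes."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    keyframes, ref_desc = [], None
    for i, frame in enumerate(frames):
        _, desc = sift.detectAndCompute(frame, None)
        if desc is None:
            continue
        if ref_desc is None:
            keyframes.append(i)
            ref_desc = desc
            continue
        pairs = matcher.knnMatch(desc, ref_desc, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # Lowe ratio test
        if len(good) < min_matches:            # view changed enough: promote to keyframe
            keyframes.append(i)
            ref_desc = desc
    return keyframes

cap = cv2.VideoCapture("surgical_field.mp4")   # hypothetical recording path
frames = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
print("keyframe indices:", select_keyframes(frames))
```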
Source: http://dx.doi.org/10.1117/12.2044165
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8315033
February 2014

Structured light imaging of epicardial mechanics.

Annu Int Conf IEEE Eng Med Biol Soc 2010 ;2010:5157-60

Washington University in St. Louis Department of Biomedical Engineering 1 Brookings Drive Campus Box 1097, Missouri 63130, USA.

There is a need for accurate measurements of mechanical strain and motion of the heart both in vitro and in vivo. We have developed a new structured-light imaging system capable of epicardial shape measurement at 333 fps at a resolution of 768 × 768 pixels. Here we present proof-of-concept data from our system applied to a beating rabbit heart in vitro to measure epicardial mechanics. This method will allow high resolution mapping of epicardial strain and virtual immobilization of the heart for removal of motion artifacts from epicardial recordings with fluorescence dyes. This will allow mapping of transmembrane potential and calcium transients in a beating heart, including in vivo.
Source: http://dx.doi.org/10.1109/IEMBS.2010.5626117
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3784005
March 2011

Ultrafast 3-D shape measurement with an off-the-shelf DLP projector.

Opt Express 2010 Sep;18(19):19743-54

Department of Mechanical Engineering, Iowa State University, Ames, IA 50011, USA.

This paper presents a technique that pushes 3-D shape measurement speed beyond the projection speed of a digital-light-processing (DLP) projector. In particular, a "solid-state" binary structured pattern is generated with each micro-mirror pixel always in one state (ON or OFF). By this means, any time segment of the projection represents the whole signal, so the exposure time can be shorter than the projection time. A sinusoidal fringe pattern is generated by properly defocusing a binary one, and Fourier fringe analysis is used for 3-D shape recovery. We have successfully reached a 3-D shape measurement speed of 4,000 Hz (80 μs exposure time) with an off-the-shelf DLP projector.
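Fourier fringe analysis recovers a wrapped phase map from a single quasi-sinusoidal fringe image by isolating one carrier side lobe in the frequency domain. A compact sketch with synthetic fringes (not the authors' code; carrier frequency and band width are illustrative):

```python
import numpy as np

def fringe_phase(img, carrier_freq, band=5):
    """img: (H, W) fringe image; carrier_freq: fringe frequency in cycles per row."""
    F = np.fft.fft(img - img.mean(axis=1, keepdims=True), axis=1)
    mask = np.zeros(img.shape[1])
    mask[int(carrier_freq - band):int(carrier_freq + band) + 1] = 1.0  # keep one side lobe
    analytic = np.fft.ifft(F * mask, axis=1)
    return np.angle(analytic)          # wrapped phase (carrier ramp not removed)

# Synthetic fringes phase-modulated by a smooth bump standing in for surface height.
H, W, f0 = 256, 256, 20
x = np.arange(W)
bump = 2.0 * np.exp(-((np.arange(H)[:, None] - 128) ** 2 + (x[None, :] - 128) ** 2) / 2000.0)
img = 0.5 + 0.5 * np.cos(2 * np.pi * f0 * x[None, :] / W + bump)
phase = fringe_phase(img, carrier_freq=f0)
print(phase.shape, float(phase.min()), float(phase.max()))
```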
Source: http://dx.doi.org/10.1364/OE.18.019743
September 2010