Publications by authors named "Danying Hu"

8 Publications


Scanning Fiber Endoscope Improves Detection of 5-Aminolevulinic Acid-Induced Protoporphyrin IX Fluorescence at the Boundary of Infiltrative Glioma.

World Neurosurg 2018 May 2;113:e51-e69. Epub 2018 Feb 2.

Department of Neurosurgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, Arizona, USA.

Objective: Fluorescence-guided surgery with protoporphyrin IX (PpIX) as a photodiagnostic marker is gaining acceptance for resection of malignant gliomas. Current wide-field imaging technologies do not have sufficient sensitivity to detect low PpIX concentrations. We evaluated a scanning fiber endoscope (SFE) for detection of PpIX fluorescence in gliomas and compared it to an operating microscope (OPMI) equipped with a fluorescence module and to a benchtop confocal laser scanning microscope (CLSM).

Methods: 5-Aminolevulinic acid-induced PpIX fluorescence was assessed in GL261-Luc2 cells in vitro and in vivo after implantation in mouse brains, at an invading glioma growth stage simulating residual tumor. Intraoperative fluorescence of high and low PpIX concentrations in normal brain and tumor regions was compared across SFE, OPMI, CLSM, and histopathology.

Results: SFE imaging of PpIX correlated with CLSM at the cellular level. PpIX accumulated in normal brain cells, but significantly less than in glioma cells. SFE was more sensitive to accumulated PpIX in fluorescent brain areas than OPMI (P < 0.01) and allowed dramatically longer imaging time (>6×) before photobleaching diminished tumor-to-background contrast.

Conclusions: SFE provides new endoscopic capabilities to view PpIX-fluorescing tumor regions at cellular resolution. SFE may allow accurate imaging of 5-aminolevulinic acid labeling of gliomas and other tumor types when current detection techniques have failed to provide reliable visualization. SFE was significantly more sensitive than OPMI to low PpIX concentrations, which is relevant to identifying the leading edge or metastasizing cells of malignant glioma or to treating low-grade gliomas. This new application has the potential to benefit surgical outcomes.
DOI: http://dx.doi.org/10.1016/j.wneu.2018.01.151
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5924630
May 2018

Semi-autonomous image-guided brain tumour resection using an integrated robotic system: A bench-top study.

Int J Med Robot 2018 Feb 3;14(1). Epub 2017 Nov 3.

Biorobotics Laboratory, Department of Electrical Engineering, University of Washington, Seattle, WA, USA.

Background: Complete brain tumour resection is critical to patients' survival rate and long-term quality of life. This paper introduces a prototype medical robotic system that aims to automatically detect and ablate residual brain tumour tissue after the tumour bulk has been removed through conventional surgery.

Methods: We focus on the development of an integrated surgical robotic system for image-guided robotic brain surgery. The Behavior Tree framework is explored to coordinate cross-platform medical subtasks.

Results: The integrated system was tested on a simulated laboratory platform. Results and performance indicate the feasibility of supervised semi-automation for residual brain tumour ablation in a simulated surgical cavity with sub-millimetre accuracy. The modularity in the control architecture allows straightforward integration of further medical devices.

Conclusions: This work presents a semi-automated laboratory setup, simulating an intraoperative robotic neurosurgical procedure with real-time endoscopic image guidance and provides a foundation for the future transition from engineering approaches to clinical application.
DOI: http://dx.doi.org/10.1002/rcs.1872
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5762424
February 2018

Toward real-time tumor margin identification in image-guided robotic brain tumor resection.

Proc SPIE Int Soc Opt Eng 2017 Feb 3;10135. Epub 2017 Mar 3.

Human Photonics Lab, Dept. of Mechanical Engr., Univ. of Washington, Seattle, WA 98195.

For patients with malignant brain tumors (glioblastomas), safe maximal resection of the tumor is critical for an increased survival rate. However, complete resection is hard to achieve because of the invasive nature of these tumors: the margin blurs from frank tumor into more normal-appearing brain tissue that single cells or clusters of malignant cells may have invaded. Recent developments in fluorescence imaging techniques have shown great potential for improved surgical outcomes by providing surgeons with intraoperative, contrast-enhanced views of the tumor during neurosurgery. Current near-infrared (NIR) fluorophores, such as indocyanine green (ICG), cyanine5.5 (Cy5.5), and 5-aminolevulinic acid (5-ALA)-induced protoporphyrin IX (PpIX), show clinical potential for targeting and guiding resection of such tumors. Real-time tumor margin identification in NIR imaging could benefit both surgeons and patients by reducing the operation time and space required by other imaging modalities, such as intraoperative MRI, and has the potential to integrate with robotically assisted surgery. In this paper, a segmentation method based on the Chan-Vese model was developed for identifying tumor boundaries in an ex vivo mouse brain from relatively noisy fluorescence images acquired by a multimodal scanning fiber endoscope (mmSFE). Tumor contours were obtained iteratively by minimizing an energy function formed by a level set function and the segmentation model. Quantitative segmentation metrics based on the tumor-to-background (T/B) ratio were evaluated. Results demonstrated the feasibility of detecting brain tumor margins in quasi-real time and suggest the potential to yield more precise brain tumor resection techniques, or even robotic interventions, in the future.
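The core idea of Chan-Vese segmentation can be illustrated with a much-reduced sketch: iteratively relabel each pixel to the region (inside or outside the contour) whose mean intensity it is closer to, which minimizes the two-phase fitting energy. This is a hypothetical illustration only, not the paper's implementation; it omits the level-set function and curvature regularizer the authors use, and all names here are invented.

```python
import numpy as np

def chan_vese_segment(image, n_iter=50):
    """Simplified two-phase Chan-Vese segmentation (no curvature term).

    Repeatedly assigns each pixel to the region whose mean intensity it
    is closer to, which minimizes the Chan-Vese fitting energy
    E = sum_in (I - c1)^2 + sum_out (I - c2)^2.
    """
    # Initialize the "inside" region with a simple intensity threshold.
    inside = image > image.mean()
    for _ in range(n_iter):
        c1 = image[inside].mean() if inside.any() else 0.0
        c2 = image[~inside].mean() if (~inside).any() else 0.0
        new_inside = (image - c1) ** 2 < (image - c2) ** 2
        if np.array_equal(new_inside, inside):
            break  # converged: no pixel changed region
        inside = new_inside
    return inside

def tb_ratio(image, mask):
    """Tumor-to-background (T/B) intensity ratio for a segmentation mask."""
    return float(image[mask].mean() / image[~mask].mean())
```

On a synthetic fluorescence image with a bright tumor patch on a dimmer background, the mask converges in a few iterations and the T/B ratio of the result can serve as the kind of quantitative metric the paper evaluates.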
DOI: http://dx.doi.org/10.1117/12.2255417
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8315009
February 2017

Path Planning for Semi-automated Simulated Robotic Neurosurgery.

Rep U S 2015 Sep-Oct;2015:2639-2645

Human Photonics Laboratory, Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA.

This paper considers a semi-automated robotic surgical procedure for removing brain tumor margins, a task that is tedious and time-consuming for surgeons when performed manually. We present robust path planning methods for robotic ablation of tumor residues of various shapes, which are represented as point clouds rather than analytical geometry. Along with the path plans, corresponding metrics are delivered to the surgeon for selecting the optimal candidate for the automated robotic ablation. The selected path plan is then executed and tested on the RAVEN II surgical robot platform as part of a semi-automated robotic brain tumor ablation surgery in a simulated tissue phantom.
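One simple way to plan an ablation path over a point-cloud residue, sketched below, is a serpentine (boustrophedon) sweep: bin points into rows and alternate the sweep direction so the tool travels a short continuous path. This is a hypothetical stand-in for the paper's path planners, with total travel distance as an example of a metric a surgeon could use to compare candidate plans; all function names are invented.

```python
import numpy as np

def raster_path(points, row_height=1.0):
    """Order an N x 3 tumor-residue point cloud into a serpentine path.

    Bins points into rows along y, then sweeps each row left-to-right
    and right-to-left alternately (boustrophedon pattern).
    """
    points = np.asarray(points, dtype=float)
    rows = np.floor(points[:, 1] / row_height).astype(int)
    path = []
    for i, r in enumerate(np.unique(rows)):
        pts = points[rows == r]
        order = np.argsort(pts[:, 0])
        if i % 2 == 1:           # reverse every other row
            order = order[::-1]
        path.extend(pts[order])
    return np.array(path)

def path_length(path):
    """Total travel distance: one metric for ranking candidate plans."""
    return float(np.linalg.norm(np.diff(path, axis=0), axis=1).sum())
```

On a 4 x 3 grid of unit-spaced points, the serpentine ordering visits every point with a total travel distance of 11 units, far shorter than an arbitrary visiting order.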
DOI: http://dx.doi.org/10.1109/IROS.2015.7353737
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4687488
December 2015

Semi-autonomous Simulated Brain Tumor Ablation with RavenII Surgical Robot using Behavior Tree.

IEEE Int Conf Robot Autom 2015 May;2015:3868-3875

Human Photonics Laboratory, Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA.

Medical robots are widely used to assist surgeons in carrying out dexterous surgical tasks, most of which require the surgeon's direct or indirect operation. A degree of autonomy in robotic surgery could not only free the surgeon from tedious repetitive tasks but also exploit the robot's advantages of high dexterity and accuracy. This paper presents a semi-autonomous neurosurgical procedure for brain tumor ablation using the RAVEN surgical robot and stereo visual feedback. By integrating the behavior tree framework, the whole surgical task is modeled flexibly and intelligently as the nodes and leaves of a behavior tree. This paper makes three main contributions: (1) describing brain tumor ablation as an ideal candidate for autonomous robotic surgery, (2) modeling and implementing the semi-autonomous surgical task using the behavior tree framework, and (3) designing an experimental simulated ablation task for a feasibility study and robot performance analysis.
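The behavior-tree idea can be sketched minimally: a Sequence node ticks its children in order and fails fast, which is how high-level surgical subtasks (e.g., locate residue, approach, ablate) can be coordinated. This is a generic illustration of the framework, not the authors' code; the node classes and subtask names below are invented.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node wrapping a callable subtask that returns SUCCESS/FAILURE."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def tick(self):
        return self.fn()

class Sequence:
    """Composite node: succeeds only if every child succeeds, in order."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE   # fail fast; remaining subtasks are skipped
        return SUCCESS
```

Because each subtask is just a node, subtasks can be added, removed, or reordered without touching the control loop, which is the modularity argument the behavior-tree framework makes.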
DOI: http://dx.doi.org/10.1109/ICRA.2015.7139738
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4578323
May 2015

Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot.

J Med Imaging (Bellingham) 2014 Oct 2;1(3):035002. Epub 2014 Dec 2.

University of Washington , Department of Mechanical Engineering, Human Photonics Laboratory, Seattle, Washington 98195, United States.

Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor decreases survival, while removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. For imaging, we developed a scanning fiber endoscope (SFE) that acquires concurrent reflectance and fluorescence wide-field images at high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot, providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal, physical-sized model of a debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using the known trajectories of the robot arm, and that the error of the reconstructed phantom is within 0.67 mm on average compared with the model design.
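The benefit of known robot-arm trajectories is that camera poses need not be estimated from images: with calibrated projection matrices in hand, each 3-D surface point follows from a closed-form linear triangulation. The sketch below shows standard two-view DLT triangulation as an illustration of this step; it is not the paper's reconstruction pipeline, and the camera setup in the usage example is invented.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2 are 3x4 camera projection matrices; when the endoscope rides
    a calibrated robot arm these poses are known in advance, so no pose
    search is needed. x1, x2 are the normalized image coordinates of
    the same scene point in the two views.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                   # null vector of A, up to scale
    return X[:3] / X[3]          # dehomogenize
```

Repeating this for every matched feature over the known raster trajectory yields the 3-D surface directly, which is the time saving the abstract reports.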
DOI: http://dx.doi.org/10.1117/1.JMI.1.3.035002
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4478723
October 2014

Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model.

Proc SPIE Int Soc Opt Eng 2015;9415:94150C

Human Photonics Lab, Dept. of Mechanical Engineering, Univ. of Washington, Seattle, WA 98195.

The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured with a micro-positioning stage. From these preliminary results, the MATLAB implementation of the algorithm runs at near real-time speed (2.5 s per pose estimate), which can be improved by implementation in C++. Error analysis produced an average distance error of 3 mm and an average orientation error of 2.5 degrees. These errors stem from: 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
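At its core, pose recovery by bundle adjustment is nonlinear least squares on reprojection error: adjust the camera parameters until the projected model points match the observed image features. The sketch below is a deliberately reduced stand-in for the paper's constrained bundle adjustment, estimating only the camera translation (rotation fixed to identity, unit focal length) with a small Gauss-Newton loop; all names and the toy scene are invented.

```python
import numpy as np

def project(points, t):
    """Pinhole projection of N x 3 points seen by a camera translated by t
    (rotation fixed to identity, focal length 1): a reduced camera model."""
    p = points - t
    return p[:, :2] / p[:, 2:3]

def estimate_translation(points, observed, iters=20):
    """Recover camera translation by Gauss-Newton on reprojection error."""
    t = np.zeros(3)
    eps = 1e-6
    for _ in range(iters):
        r = (project(points, t) - observed).ravel()
        # Numerical Jacobian of the residual with respect to t.
        J = np.empty((r.size, 3))
        for k in range(3):
            dt = np.zeros(3)
            dt[k] = eps
            J[:, k] = ((project(points, t + dt) - observed).ravel() - r) / eps
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        t = t + step
        if np.linalg.norm(step) < 1e-10:
            break                # converged
    return t
```

A full bundle adjustment additionally optimizes rotation (and possibly intrinsics) under the constraints the paper describes, but the residual-and-Jacobian structure is the same.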
DOI: http://dx.doi.org/10.1117/12.2082872
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4376325
January 2015

Mapping surgical fields by moving a laser-scanning multimodal scope attached to a robot arm.

Proc SPIE Int Soc Opt Eng 2014 Feb 12;9036. Epub 2014 Mar 12.

Human Photonics Lab, Dept. of Mechanical Engineering, Univ. of Washington, Seattle, WA 98195.

Endoscopic visualization in brain tumor removal is challenging because tumor tissue is often visually indistinguishable from healthy tissue. Fluorescence imaging can improve tumor delineation, though this impairs reflectance-based visualization of gross anatomical features. To accurately navigate and resect tumors, we created an ultrathin, flexible scanning fiber endoscope (SFE) that acquires reflectance and fluorescence wide-field images at high resolution. Furthermore, our miniature imaging system is affixed to a robotic arm providing programmable motion of the SFE, from which we generate multimodal surface maps of the surgical field. To test this system, synthetic phantoms of a debulked brain tumor were fabricated with spots of fluorescence representing residual tumor. Three-dimensional (3D) surface maps of this surgical field are produced by moving the SFE over the phantom during concurrent reflectance and fluorescence imaging (30 Hz video). SIFT-based feature matching between reflectance images is implemented to select a subset of key frames, which are reconstructed in 3D by bundle adjustment. The resultant reconstruction yields a multimodal 3D map of the tumor region that can improve visualization and robotic path planning. Efficiency in creating these 3D maps is important, as they are generated multiple times during tumor margin clean-up. By using pre-programmed motions of the robot arm holding the SFE, the computer vision algorithms are optimized for efficiency by reducing search times. Preliminary results indicate that the time for creating these multimodal maps of the surgical field can be reduced to one-third by using the known trajectories of the surgical robot moving the image-guided tool.
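Key-frame selection from a 30 Hz stream can be sketched abstractly: declare a new key frame whenever too few of the current key frame's features are still being matched, so bundle adjustment only runs on the handful of frames that add new coverage. This hypothetical sketch substitutes sets of feature IDs for actual SIFT descriptor matches; the function name and threshold are invented.

```python
def select_key_frames(frame_features, min_overlap=0.5):
    """Thin a video stream to key frames for 3-D reconstruction.

    frame_features: one set of feature IDs per frame (a stand-in for
    SIFT features matched between frames). A new key frame is declared
    whenever fewer than `min_overlap` of the current key frame's
    features remain visible.
    """
    if not frame_features:
        return []
    keys = [0]                          # the first frame seeds the map
    for i in range(1, len(frame_features)):
        ref = frame_features[keys[-1]]
        overlap = len(ref & frame_features[i]) / max(len(ref), 1)
        if overlap < min_overlap:
            keys.append(i)              # scene moved on: new key frame
    return keys
```

With known robot trajectories, the search for matches between consecutive frames can additionally be restricted to where features are predicted to reappear, which is the search-time reduction the abstract describes.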
DOI: http://dx.doi.org/10.1117/12.2044165
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8315033
February 2014