Publications by authors named "Shizuo Kaji"

7 Publications


[Recent Review Articles in Radiological Physics and Technology].

Nihon Hoshasen Gijutsu Gakkai Zasshi 2020;76(11):1207-1210

Faculty of Engineering, Gifu University.

DOI: http://dx.doi.org/10.6009/jjrt.2020_JSRT_76.11.1207
December 2020

[Improvement in Image Quality of CBCT during Treatment by Cycle Generative Adversarial Network].

Nihon Hoshasen Gijutsu Gakkai Zasshi 2020;76(11):1173-1184

Department of Radiology, University of Tokyo Hospital.

Purpose: Volumetric modulated arc therapy (VMAT) acquires projection images during rotational irradiation, from which cone-beam computed tomography (CBCT) images can be reconstructed during VMAT delivery. However, the poor quality of these CBCT images prevents accurate recognition of organ positions during treatment. The purpose of this study was to improve the image quality of CBCT acquired during treatment using a cycle generative adversarial network (CycleGAN).

Method: Twenty patients with clinically localized prostate cancer were treated with VMAT, and projection images for intra-treatment CBCT (iCBCT) were acquired. Synthetic planning CT (SynPCT) images with improved quality were produced by a CycleGAN, which requires only unpaired and unaligned iCBCT and planning CT (PCT) images for training. We performed visual and quantitative evaluations of iCBCT, SynPCT, and deformable image registration (DIR) of PCT to confirm the clinical usefulness.
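
For illustration, the following is a minimal sketch of the unpaired CycleGAN generator objective described in the Method, written in PyTorch. The toy networks, tensor shapes, and loss weight are assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch of the unpaired CycleGAN objective (illustrative only).
import torch
import torch.nn as nn

def toy_net():
    # Stand-in for a real generator (e.g., a ResNet/U-Net style CNN).
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G = toy_net()  # iCBCT -> SynPCT
F = toy_net()  # PCT   -> synthetic iCBCT
D_pct = nn.Sequential(nn.Conv2d(1, 8, 4, stride=2), nn.Flatten(),
                      nn.LazyLinear(1))  # real-vs-synthetic PCT discriminator

adv, l1 = nn.MSELoss(), nn.L1Loss()  # least-squares GAN loss + cycle loss
lambda_cyc = 10.0                    # cycle weight from the CycleGAN paper

icbct = torch.rand(4, 1, 64, 64)  # unpaired batch of iCBCT slices (toy data)
pct = torch.rand(4, 1, 64, 64)    # unpaired batch of planning-CT slices

syn_pct = G(icbct)
pred = D_pct(syn_pct)
# The generator must fool the discriminator and survive the round trip.
loss_G = adv(pred, torch.ones_like(pred)) \
         + lambda_cyc * (l1(F(syn_pct), icbct) + l1(G(F(pct)), pct))
loss_G.backward()
```

Because the cycle-consistency term only compares each image with its own reconstruction, no paired or aligned PCT is ever needed, which is what makes training on routinely collected iCBCT and PCT possible.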

Result: We identified suitable CycleGAN network architectures and hyperparameters for SynPCT. The image quality of SynPCT improved both visually and quantitatively while the anatomical structures of the original iCBCT were preserved. The undesirable deformation of PCT was reduced when SynPCT, rather than iCBCT, was used as the DIR reference.

Conclusion: We performed CycleGAN-based image synthesis for iCBCT that preserves organ positions, and confirmed its clinical usefulness.

DOI: http://dx.doi.org/10.6009/jjrt.2020_JSRT_76.11.1173
November 2020

Assessment of dysplasia in bone marrow smear with convolutional neural network.

Sci Rep 2020 Sep 7;10(1):14734. Epub 2020 Sep 7.

Department of Hemato-Oncology, International Medical Center, Saitama Medical University, Saitama, Japan.

In this study, we developed the world's first artificial intelligence (AI) system that assesses the dysplasia of blood cells on bone marrow smears and presents the prediction result for one of the most representative dysplasias, decreased granules (DG). We photographed field images of bone marrow smears from patients with myelodysplastic syndrome (MDS) or non-MDS diseases and cropped each cell using an originally developed cell detector. Two morphologists labelled each cell, evaluating the degree of dysplasia on a four-point scale from 0 to 3 (e.g., neutrophils with severely decreased granules were labelled DG3). We then constructed the classifier from the dataset of labelled images. The detector and classifier were based on a deep neural network pre-trained with natural images. We obtained 1797 labelled images, in which the morphologists identified 134 DGs (DG1: 46, DG2: 77, DG3: 11). Subsequently, we performed a five-fold cross-validation to evaluate the performance of the classifier. For DG1-3 as labelled by the morphologists, the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy were 91.0%, 97.7%, 76.3%, 99.3%, and 97.2%, respectively. When DG1 was excluded, the corresponding values were 85.2%, 98.9%, 80.6%, 99.2%, and 98.2%.
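
As a concrete illustration of the reported evaluation, the sketch below computes the five metrics from a binary (DG vs. non-DG) confusion matrix in Python. The counts are chosen only to approximately reproduce the percentages above; they are not taken from the study data.

```python
# Sketch of the evaluation metrics reported above, computed from a binary
# confusion matrix. The counts below are illustrative, not the study's data.

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true DGs detected
        "specificity": tn / (tn + fp),  # fraction of non-DGs correctly passed
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# 1797 cells total, 134 DGs; counts picked to roughly match the paper's
# reported 91.0% / 97.7% / 76.3% / 99.3% / 97.2%.
print(binary_metrics(tp=122, fp=38, tn=1625, fn=12))
```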

DOI: http://dx.doi.org/10.1038/s41598-020-71752-x
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7477564
September 2020

Visual enhancement of Cone-beam CT by use of CycleGAN.

Med Phys 2020 Mar 3;47(3):998-1010. Epub 2020 Jan 3.

Department of Radiology, University of Tokyo Hospital, Tokyo, 113-8655, Japan.

Purpose: Cone-beam computed tomography (CBCT) offers advantages over conventional fan-beam CT in that it requires a shorter acquisition time and lower radiation exposure. However, CBCT images suffer from low soft-tissue contrast, noise, and artifacts compared to conventional fan-beam CT images. Therefore, it is essential to improve the image quality of CBCT.

Methods: In this paper, we propose a synthesis-based approach that translates CBCT images using deep neural networks. Our method requires only unpaired and unaligned CBCT images and planning fan-beam CT (PlanCT) images for training. The CBCT and PlanCT images may even be obtained from different patients, as long as they are acquired with the same scanner settings. Once trained, the model can directly translate three-dimensionally reconstructed CBCT images into high-quality PlanCT-like images.
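
As an illustration of this inference step, the sketch below applies a trained generator slice by slice to a reconstructed CBCT volume. The stand-in network, volume size, and names are assumptions; the paper's actual model is not reproduced here.

```python
# Sketch of slice-wise translation of a reconstructed CBCT volume
# (illustrative stand-ins, not the authors' trained model).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))  # toy stand-in
generator.eval()

cbct_volume = torch.rand(120, 256, 256)  # (slices, H, W), toy volume

with torch.no_grad():
    # Translate one axial slice at a time to keep memory bounded.
    planct_like = torch.stack([generator(s[None, None])[0, 0]
                               for s in cbct_volume])

print(planct_like.shape)  # torch.Size([120, 256, 256])
```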

Results: We demonstrate the effectiveness of our method with images obtained from 20 prostate cancer patients and provide a statistical and visual comparison. The translated images show substantial improvement in voxel values, spatial uniformity, and artifact suppression compared to the original CBCT images, while the anatomical structures of the original CBCT images are well preserved.

Conclusions: Our method produces visually PlanCT-like images from CBCT images while preserving anatomical structures.

DOI: http://dx.doi.org/10.1002/mp.13963
March 2020

Overview of image-to-image translation by use of deep neural networks: denoising, super-resolution, modality conversion, and reconstruction in medical imaging.

Radiol Phys Technol 2019 Sep 20;12(3):235-248. Epub 2019 Jun 20.

LPixel Inc., Tokyo, Japan.

Since the advent of deep convolutional neural networks (DNNs), computer vision has seen extremely rapid progress that has led to huge advances in medical imaging. Every year, many new methods are reported at conferences such as the International Conference on Medical Image Computing and Computer-Assisted Intervention and Machine Learning for Medical Image Reconstruction, or published online at the preprint server arXiv. There is a plethora of surveys on applications of neural networks in medical imaging (see [1] for a relatively recent comprehensive survey). This article does not aim to cover all aspects of the field, but focuses on a particular topic: image-to-image translation. Although the topic may not sound familiar, it turns out that many seemingly unrelated applications can be understood as instances of image-to-image translation. Such applications include (1) noise reduction, (2) super-resolution, (3) image synthesis, and (4) reconstruction. The same underlying principles and algorithms work for these various tasks. Our aim is to introduce some of the key ideas on this topic from a unified viewpoint. We introduce core ideas and jargon that are specific to image processing by use of DNNs. An intuitive grasp of these core ideas and a knowledge of the technical terms would be of great help to the reader for understanding the existing and future applications. Most of the recent applications that build on image-to-image translation are based on one of two fundamental architectures, called pix2pix and CycleGAN, depending on whether the available training data are paired or unpaired (see Sect. 1.3). We provide codes ([2, 3]) that implement these two architectures with various enhancements; they are available online under the very permissive MIT license. We also provide a hands-on tutorial for training a denoising model based on our codes (see Sect. 6). We hope that this article, together with the codes, will provide both an overview and the details of the key algorithms, and that it will serve as a basis for the development of new applications.
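
To make the paired/unpaired distinction concrete, here is a minimal sketch of the pix2pix generator loss in PyTorch: with aligned input/target pairs, an L1 term against the known target replaces the cycle consistency used in the unpaired setting. The toy networks and the denoising pairing are illustrative assumptions, not the article's released code.

```python
# Sketch of the paired (pix2pix-style) generator objective: adversarial
# loss plus L1 to the aligned target. Networks here are toy stand-ins.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))  # toy generator
D = nn.Sequential(nn.Conv2d(2, 8, 4, stride=2), nn.Flatten(),
                  nn.LazyLinear(1))  # conditional D sees (input, output) pair

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # L1 weight from the original pix2pix paper

noisy = torch.rand(4, 1, 64, 64)  # paired toy batch: noisy input ...
clean = torch.rand(4, 1, 64, 64)  # ... and its aligned clean target

fake = G(noisy)
pred = D(torch.cat([noisy, fake], dim=1))
# Fool the discriminator AND stay close to the known target image.
loss_G = bce(pred, torch.ones_like(pred)) + lambda_l1 * l1(fake, clean)
loss_G.backward()
```

When no such aligned target exists, as with CBCT and planning CT above, the L1 term has nothing to compare against, which is exactly why the unpaired CycleGAN formulation is used instead.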

DOI: http://dx.doi.org/10.1007/s12194-019-00520-y
September 2019

A circuit-preserving mapping from multilevel to Boolean dynamics.

J Theor Biol 2018 Mar 24;440:71-79. Epub 2017 Dec 24.

Department of Mathematics, Yamaguchi University, 1677-1, Yoshida, Yamaguchi 753-8512, Japan.

Many discrete models of biological networks rely exclusively on Boolean variables, and many tools and theorems are available for the analysis of strictly Boolean models. However, multilevel variables are often required to account for threshold effects, to which knowledge of the Boolean case does not generalise straightforwardly. This has motivated the development of methods for converting multilevel models to Boolean ones. In particular, Van Ham's method has been shown to yield a one-to-one, neighbour- and regulation-preserving dynamics, making it the de facto standard approach to the problem. However, Van Ham's method has several drawbacks: most notably, it introduces vast regions of "non-admissible" states that have no counterpart in the original multilevel model. This raises special difficulties for the analysis of the interaction between variables and of circuit functionality, which is believed to be central to the understanding of the dynamic properties of logical models. Here, we propose a new multilevel-to-Boolean conversion method, with a software implementation. Unlike Van Ham's method, ours does not yield a one-to-one transposition of multilevel trajectories; however, it maps each and every Boolean state to a specific multilevel state, thus eliminating the non-admissible regions, at the expense of (apparently) more complicated, "parallel" trajectories. One of the prominent features of our method is that it preserves dynamics and the interaction of variables in a certain manner. As a demonstration, we apply our method to construct a new Boolean counter-example to the well-known conjecture that a local negative circuit is necessary to generate sustained oscillations. This result illustrates the general relevance of our method for the study of multilevel logical models.
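
For readers unfamiliar with Van Ham's encoding, the sketch below illustrates in Python the standard mapping it relies on: a variable with levels 0..m becomes m Boolean variables b_i = (x >= i), and the non-admissible states mentioned above are exactly the Boolean tuples that fail to be non-increasing. This is a generic illustration, not the authors' software.

```python
# Sketch of the classic Van Ham ("thermometer") encoding of a multilevel
# variable, and the admissibility check it induces. Illustrative only.

def van_ham(x: int, m: int) -> tuple:
    """Encode level x (0 <= x <= m) as m Boolean variables b_i = (x >= i)."""
    return tuple(int(x >= i) for i in range(1, m + 1))

def is_admissible(bits: tuple) -> bool:
    """A Boolean state is admissible iff its bits never increase."""
    return all(a >= b for a, b in zip(bits, bits[1:]))

m = 2
print([van_ham(x, m) for x in range(m + 1)])  # [(0, 0), (1, 0), (1, 1)]
print(is_admissible((0, 1)))                  # False: no multilevel counterpart
```

Already for m = 2 there are 4 Boolean states but only 3 multilevel levels; the gap grows quickly with more variables, which is the "vast regions of non-admissible states" problem that the authors' alternative mapping avoids.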

DOI: http://dx.doi.org/10.1016/j.jtbi.2017.12.013
March 2018