Publications by authors named "Tomohiro Tada"

67 Publications

Automatic anatomical classification of colonoscopic images using deep convolutional neural networks.

Gastroenterol Rep (Oxf) 2021 Jun 7;9(3):226-233. Epub 2020 Dec 7.

Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan.

Background: Colonoscopy can detect colorectal diseases, including cancers, polyps, and inflammatory bowel diseases. A computer-aided diagnosis (CAD) system using deep convolutional neural networks (CNNs) that can recognize anatomical locations during a colonoscopy could efficiently assist practitioners. We aimed to construct a CAD system using a CNN to classify colorectal images by segment: the cecum, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum.

Method: We constructed a CNN by training it on 9,995 colonoscopy images and tested its performance on 5,121 independent colonoscopy images, categorized according to seven anatomical locations: the terminal ileum, the cecum, ascending colon to transverse colon, descending colon to sigmoid colon, the rectum, the anus, and indistinguishable parts. We examined images taken during total colonoscopies performed between January 2017 and November 2017 at a single center. We evaluated the concordance between the diagnoses made by endoscopists and those made by the CNN. The main outcomes of the study were the sensitivity and specificity of the CNN for the anatomical categorization of colonoscopy images.
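As a rough illustration of the setup this Method describes (a single CNN classifying each colonoscopy frame into one of seven anatomical categories), here is a minimal sketch in PyTorch. The backbone choice, label order, and training loop are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a 7-class anatomical classifier, assuming a PyTorch setup.
# The backbone and label names are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["terminal_ileum", "cecum", "ascending_to_transverse",
           "descending_to_sigmoid", "rectum", "anus", "indistinguishable"]

model = models.resnet50(weights="IMAGENET1K_V1")          # any ImageNet backbone
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # 7-way output head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step over a batch of labeled colonoscopy frames."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```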

Results: The constructed CNN recognized the anatomical locations of colonoscopy images with the following areas under the curve: 0.979 for the terminal ileum; 0.940 for the cecum; 0.875 for ascending colon to transverse colon; 0.846 for descending colon to sigmoid colon; 0.835 for the rectum; and 0.992 for the anus. During the test process, the CNN system correctly recognized 66.6% of the images.

Conclusion: We constructed a new CNN system with clinically relevant performance for recognizing the anatomical locations of colonoscopy images, which is the first step toward a CAD system that supports endoscopists during colonoscopy and provides assurance of the quality of the procedure.
Source
http://dx.doi.org/10.1093/gastro/goaa078
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8309686
June 2021

Ability of artificial intelligence to detect T1 esophageal squamous cell carcinoma from endoscopic videos and the effects of real-time assistance.

Sci Rep 2021 Apr 8;11(1):7759. Epub 2021 Apr 8.

Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan.

Diagnosis using artificial intelligence (AI) with deep learning could be useful in endoscopic examinations. We investigated the ability of AI to detect superficial esophageal squamous cell carcinoma (ESCC) from esophagogastroduodenoscopy (EGD) videos. We retrospectively collected 8428 EGD images of esophageal cancer to develop a convolutional neural network through deep learning, and we evaluated the detection accuracy of the AI diagnostic system compared with that of 18 endoscopists. We used 144 EGD videos across two validation sets. The first set comprised 64 EGD observation videos of ESCCs using both white-light imaging (WLI) and narrow-band imaging (NBI). We then evaluated the system using a second set of 80 EGD videos from 40 patients (20 with superficial ESCC and 20 without ESCC). In the first set, the AI system correctly diagnosed 100% of the ESCCs. In the second set, it correctly detected 85% (17/20) of the ESCCs; 75% (15/20) were detected by WLI and 55% (11/20) by NBI, and the positive predictive value was 36.7%. The endoscopists correctly detected 45% (range, 25%-70%) of the ESCCs, and with real-time AI assistance their sensitivities improved significantly over unassisted reading (p < 0.05). AI can detect superficial ESCCs from EGD videos with high sensitivity, and endoscopist sensitivity improved with real-time AI support.
Source
http://dx.doi.org/10.1038/s41598-021-87405-6
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8032773
April 2021

Artificial intelligence diagnostic system predicts multiple Lugol-voiding lesions in the esophagus and patients at high risk for esophageal squamous cell carcinoma.

Endoscopy 2021 Feb 4. Epub 2021 Feb 4.

Department of Gastroenterology, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan.

Background: An esophagus with multiple Lugol-voiding lesions (LVLs) after iodine staining is known to be at high risk for esophageal cancer; however, it is preferable to identify high-risk cases without staining because iodine causes discomfort and prolongs examination times. This study assessed the capability of an artificial intelligence (AI) system to predict multiple LVLs from images without iodine staining and to identify patients at high risk for esophageal cancer.

Methods: We constructed the AI system by preparing a training set of 6634 images from white-light and narrow-band imaging in 595 patients before they underwent endoscopic examination with iodine staining. Diagnostic performance was evaluated on an independent validation dataset (667 images from 72 patients) and compared with that of 10 experienced endoscopists.

Results: The sensitivity, specificity, and accuracy of the AI system in predicting multiple LVLs were 84.4%, 70.0%, and 76.4%, respectively, compared with 46.9%, 77.5%, and 63.9%, respectively, for the endoscopists. The AI system had significantly higher sensitivity than 9 of the 10 experienced endoscopists. We also identified six endoscopic findings that were significantly more frequent in patients with multiple LVLs; however, the AI system had greater sensitivity than these findings for predicting multiple LVLs. Moreover, patients with AI-predicted multiple LVLs had significantly more cancers of the esophagus and head and neck than patients without predicted multiple LVLs.

Conclusion: The AI system could predict multiple LVLs with high sensitivity from images without iodine staining. The system could enable endoscopists to apply iodine staining more judiciously.
Source
http://dx.doi.org/10.1055/a-1334-4053
February 2021

Usefulness of an artificial intelligence system for the detection of esophageal squamous cell carcinoma evaluated with videos simulating overlooking situation.

Dig Endosc 2021 Jan 27. Epub 2021 Jan 27.

AI Medical Service Inc, Tokyo, Japan.

Objectives: Artificial intelligence (AI) systems have shown favorable performance in detecting esophageal squamous cell carcinoma (ESCC). However, previous studies were limited by the quality of their validation methods. In this study, we evaluated the performance of an AI system using videos that simulate situations in which ESCC is overlooked.

Methods: We used 17,336 images from 1376 superficial ESCCs and 1461 images from 196 noncancerous and normal esophagi to construct the AI system. To record the validation videos, the endoscope was passed through the esophagus at a constant speed without focusing on the lesion, simulating situations in which ESCC is missed. The validation videos were evaluated by the AI system and 21 endoscopists.

Results: We prepared a validation set of 100 videos: 50 containing superficial ESCCs, 22 containing noncancerous lesions, and 28 of normal esophagi. The AI system had a sensitivity of 85.7% (54 of 63 ESCCs) and a specificity of 40%. Initial evaluation by the endoscopists, conducted with plain video (without AI support), had an average sensitivity of 75.0% (47.3 of 63 ESCCs) and a specificity of 91.4%. Subsequent evaluation by the endoscopists with AI assistance improved their sensitivity to 77.7% (P = 0.00696) without changing their specificity (91.6%, P = 0.756).

Conclusions: Our AI system had high sensitivity for the detection of ESCC. As a support tool, the system has the potential to enhance detection of ESCC without reducing specificity. (UMIN000039645).
Source
http://dx.doi.org/10.1111/den.13934
January 2021

Artificial intelligence for cancer detection of the upper gastrointestinal tract.

Dig Endosc 2021 Jan 28;33(2):254-262. Epub 2020 Dec 28.

Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.

In recent years, artificial intelligence (AI) has become useful to physicians in the field of image recognition thanks to three elements: deep learning (specifically, convolutional neural networks [CNNs]), high-performance computers, and large amounts of digitized data. In the field of gastrointestinal endoscopy, Japanese endoscopists have produced the world's first CNN-based AI systems for detecting gastric and esophageal cancers. This study reviews papers on CNN-based AI for gastrointestinal cancers and discusses the future of this technology in clinical practice. Employing AI-based endoscopes would enable earlier cancer detection. The superior diagnostic ability of AI technology may be particularly beneficial for early gastrointestinal cancers, for which endoscopists' diagnostic abilities and accuracy vary. AI coupled with the expertise of endoscopists would increase the accuracy of endoscopic diagnosis.
Source
http://dx.doi.org/10.1111/den.13897
January 2021

Challenging detection of hard-to-find gastric cancers with artificial intelligence-assisted endoscopy.

Gut 2021 Jun 18;70(6):1196-1198. Epub 2020 Aug 18.

Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan.

Source
http://dx.doi.org/10.1136/gutjnl-2020-322453
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8108284
June 2021

Real-time assessment of video images for esophageal squamous cell carcinoma invasion depth using artificial intelligence.

J Gastroenterol 2020 Nov 10;55(11):1037-1045. Epub 2020 Aug 10.

AI Medical Service Inc., Tokyo, Japan.

Background: Although optimal treatment of superficial esophageal squamous cell carcinoma (SCC) requires accurate evaluation of cancer invasion depth, the current process is rather subjective and may vary by observer. We, therefore, aimed to develop an AI system to calculate cancer invasion depth.

Methods: We gathered and selected 23,977 images (6857 white-light imaging [WLI] and 17,120 narrow-band imaging/blue-laser imaging [NBI/BLI] images) of pathologically proven superficial esophageal SCC from endoscopic videos and still images taken in our facility, to use as a learning dataset. We annotated the images with information [such as magnifying endoscopy (ME) or non-ME, and pEP-LPM, pMM, pSM1, and pSM2-3 cancers] based on the pathologic diagnosis of the resected specimens. We created a model using a convolutional neural network. The performance of the AI system was compared with that of invited experts using the same validation video set, which was independent of the learning dataset.

Results: Accuracy, sensitivity, and specificity with non-magnifying endoscopy (non-ME) were 87%, 50%, and 99% for the AI system and 85%, 45%, and 97% for the experts. Accuracy, sensitivity, and specificity with ME were 89%, 71%, and 95% for the AI system and 84%, 42%, and 97% for the experts.

Conclusions: Most diagnostic parameters were higher for the AI system than for the experts. These results suggest that our AI system could potentially provide useful support during endoscopies.
Source
http://dx.doi.org/10.1007/s00535-020-01716-5
November 2020

Diagnosis of pharyngeal cancer on endoscopic video images by Mask region-based convolutional neural network.

Dig Endosc 2021 May 16;33(4):569-576. Epub 2020 Sep 16.

AI Medical Service Inc., Tokyo, Japan.

Objectives: We aimed to develop an artificial intelligence (AI) system for the real-time diagnosis of pharyngeal cancers.

Methods: Endoscopic video images and still images of pharyngeal cancer treated in our facility were collected. A total of 4559 images of pathologically proven pharyngeal cancer (1243 using white light imaging and 3316 using narrow-band imaging/blue laser imaging) from 276 patients were used as a training dataset. The AI system used a convolutional neural network (CNN) model typical of the type used to analyze visual imagery. Supervised learning was used to train the CNN. The AI system was evaluated using an independent validation dataset of 25 video images of pharyngeal cancer and 36 video images of normal pharynx taken at our hospital.

Results: The AI system diagnosed 23/25 (92%) pharyngeal cancers as cancer and 17/36 (47%) non-cancers as non-cancer. The processing speed of the AI system was 0.03 s per image, which meets the speed required for real-time diagnosis. The sensitivity, specificity, and accuracy for the detection of cancer were 92%, 47%, and 66%, respectively.
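These figures follow directly from the stated per-video counts; a small sketch reproducing the arithmetic:

```python
# Reproducing the reported per-video metrics from the stated counts.
tp, fn = 23, 2    # 23 of 25 cancer videos called cancer
tn, fp = 17, 19   # 17 of 36 normal-pharynx videos called non-cancer

sensitivity = tp / (tp + fn)                 # 0.92
specificity = tn / (tn + fp)                 # ~0.47
accuracy = (tp + tn) / (tp + fn + tn + fp)   # 40/61 ~ 0.66
print(f"{sensitivity:.0%}, {specificity:.0%}, {accuracy:.0%}")
```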

Conclusions: Our single-institution study showed that our AI system for diagnosing cancers of the pharyngeal region had promising performance with high sensitivity and acceptable specificity. Further training and improvement of the system are required with a larger dataset including multiple centers.
Source
http://dx.doi.org/10.1111/den.13800
May 2021

Application of artificial intelligence using a convolutional neural network for diagnosis of early gastric cancer based on magnifying endoscopy with narrow-band imaging.

J Gastroenterol Hepatol 2021 Feb 28;36(2):482-489. Epub 2020 Jul 28.

AI Medical Service Inc., Tokyo, Japan.

Background And Aim: Magnifying endoscopy with narrow-band imaging (ME-NBI) has made a huge contribution to clinical practice. However, acquiring skill at ME-NBI diagnosis of early gastric cancer (EGC) requires considerable expertise and experience. Recently, artificial intelligence (AI), using deep learning and a convolutional neural network (CNN), has made remarkable progress in various medical fields. Here, we constructed an AI-assisted CNN computer-aided diagnosis (CAD) system, based on ME-NBI images, to diagnose EGC and evaluated the diagnostic accuracy of the AI-assisted CNN-CAD system.

Methods: The AI-assisted CNN-CAD system (ResNet50) was trained and validated on a dataset of 5574 ME-NBI images (3797 EGCs, 1777 non-cancerous mucosa and lesions). To evaluate the diagnostic accuracy, a separate test dataset of 2300 ME-NBI images (1430 EGCs, 870 non-cancerous mucosa and lesions) was assessed using the AI-assisted CNN-CAD system.

Results: The AI-assisted CNN-CAD system required 60 s to analyze the 2300 test images. The overall accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the CNN were 98.7%, 98%, 100%, 100%, and 96.8%, respectively. All misdiagnosed EGC images were either of low quality or showed superficially depressed, intestinal-type intramucosal cancers that are difficult to distinguish from gastritis, even for experienced endoscopists.

Conclusions: The AI-assisted CNN-CAD system for ME-NBI diagnosis of EGC could process many stored ME-NBI images in a short period of time and had a high diagnostic ability. This system may have great potential for future application to real clinical settings, which could facilitate ME-NBI diagnosis of EGC in practice.
Source
http://dx.doi.org/10.1111/jgh.15190
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7984440
February 2021

Highly accurate artificial intelligence systems to predict the invasion depth of gastric cancer: efficacy of conventional white-light imaging, nonmagnifying narrow-band imaging, and indigo-carmine dye contrast imaging.

Gastrointest Endosc 2020 Oct 25;92(4):866-873.e1. Epub 2020 Jun 25.

Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.

Background And Aims: Diagnosing the invasion depth of gastric cancer (GC) is necessary to determine the optimal method of treatment. Although the efficacy of evaluating macroscopic features and EUS has been reported, there is a need for more accurate and objective methods. The primary aim of this study was to test the efficacy of novel artificial intelligence (AI) systems in predicting the invasion depth of GC.

Methods: A total of 16,557 images from 1084 cases of GC treated by endoscopic resection or surgery between January 2013 and June 2019 were extracted. Cases were randomly assigned to training and test datasets at a ratio of 4:1. Through transfer learning with a convolutional neural network architecture (ResNet50), 3 independent AI systems were developed. The 3 systems were trained to predict the invasion depth of GC from conventional white-light imaging (WLI), nonmagnifying narrow-band imaging (NBI), and indigo-carmine dye contrast imaging (Indigo), respectively.
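The Methods specify transfer learning on a ResNet50 backbone, with three systems trained independently for WLI, NBI, and Indigo; a minimal sketch of that pattern in PyTorch follows (the two-class depth-label scheme and training details are assumptions, not taken from the paper):

```python
# Sketch: one invasion-depth classifier per modality via ResNet50 transfer learning.
# The binary label scheme (e.g. mucosal/SM1 vs. deeper) is an assumption here.
import torch.nn as nn
from torchvision import models

def build_depth_model(num_classes: int = 2) -> nn.Module:
    model = models.resnet50(weights="IMAGENET1K_V1")          # pretrained start
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new task head
    return model

# Three independent systems, one per imaging modality, as described above.
systems = {m: build_depth_model() for m in ("WLI", "NBI", "Indigo")}
```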

Results: The area under the curve of the WLI AI system was 0.9590. The lesion-based sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the WLI AI system were 84.4%, 99.4%, 94.5%, 98.5%, and 92.9%, respectively. The lesion-based accuracies of the WLI, NBI, and Indigo AI systems were 94.5%, 94.3%, and 95.5%, respectively, with no significant difference.

Conclusions: These new AI systems trained with multiple images from different angles and distances could predict the invasion depth of GC with high accuracy. The lesion-based accuracy of the WLI, NBI, and Indigo AI systems was not significantly different.
Source
http://dx.doi.org/10.1016/j.gie.2020.06.047
October 2020

Stratification of gastric cancer risk using a deep neural network.

JGH Open 2020 Jun 26;4(3):466-471. Epub 2019 Dec 26.

AI Medical Service Inc Tokyo Japan.

Background And Aim: Stratifying gastric cancer (GC) risk and endoscopy findings in high-risk individuals may provide effective surveillance for GC. We developed a computerized image-analysis system for endoscopic images to stratify the risk of GC.

Methods: The system was trained using images taken during endoscopic examinations with non-magnified white-light imaging. Patients were classified as high risk (patients with GC), moderate risk (patients with current or past Helicobacter pylori infection or gastric atrophy), or low risk (patients with no history of H. pylori infection or gastric atrophy). After selection, 20,960, 17,404, and 68,920 images were collected as training images for the high-, moderate-, and low-risk groups, respectively.

Results: The performance of the artificial intelligence (AI) system was evaluated by the prevalence of GC in each group, using an independent validation dataset of patients who underwent endoscopic examination and serum antibody testing. In total, 12,824 images from 454 patients were included in the analysis. The time required for diagnosing all the images was 345 seconds. The AI system diagnosed 46, 250, and 158 patients as low, moderate, and high risk, respectively. The prevalence of GC in the low-, moderate-, and high-risk groups was 2.2%, 8.8%, and 16.4%, respectively (P = 0.0017). Three experienced endoscopists also successfully stratified the risk; however, interobserver agreement was not satisfactory (kappa value of 0.27, indicating fair agreement).
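The abstract does not state which kappa statistic was used; for three raters, Fleiss' kappa is one standard choice, sketched below with statsmodels (the ratings matrix is dummy data, not the study's):

```python
# Sketch: Fleiss' kappa for 3 endoscopists assigning low/moderate/high risk.
# "ratings" (patients x raters, coded 0/1/2) is dummy data, not study data.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([[0, 0, 1],
                    [2, 2, 2],
                    [1, 0, 1],
                    [1, 1, 2]])            # one row per patient
table, _ = aggregate_raters(ratings)       # per-patient category counts
print(fleiss_kappa(table))                 # the study itself reported 0.27
```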

Conclusion: The current AI system detected significant differences in the prevalence of GC among the low-, moderate-, and high-risk groups, suggesting its potential for stratifying GC risk.
Source
http://dx.doi.org/10.1002/jgh3.12281
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7273725
June 2020

Artificial intelligence for the detection of esophageal and esophagogastric junctional adenocarcinoma.

J Gastroenterol Hepatol 2021 Jan 27;36(1):131-136. Epub 2020 Jun 27.

Engineering, AI Medical Service Inc., Tokyo, Japan.

Background And Aim: Conventional endoscopy for the early detection of esophageal and esophagogastric junctional adenocarcinoma (E/J cancer) is limited because early lesions are asymptomatic, and the associated changes in the mucosa are subtle. There are no reports on artificial intelligence (AI) diagnosis for E/J cancer from Asian countries. Therefore, we aimed to develop a computerized image analysis system using deep learning for the detection of E/J cancers.

Methods: A total of 1172 images from 166 pathologically proven superficial E/J cancer cases and 2271 images of normal mucosa of the esophagogastric junction from 219 cases were used as the training image data. A total of 232 images from 36 cancer cases and 43 non-cancerous cases were used as the validation test data. The same validation test data were diagnosed by 15 board-certified specialists (experts).

Results: The sensitivity, specificity, and accuracy of the AI system were 94%, 42%, and 66%, respectively, and those of the experts were 88%, 43%, and 63%, respectively. The sensitivity of the AI system was favorable, while its specificity for non-cancerous lesions was similar to that of the experts. Interobserver agreement among the experts for detecting superficial E/J cancer was fair (Fleiss' kappa = 0.26, z = 20.4, P < 0.001).

Conclusions: Our AI system achieved high sensitivity and acceptable specificity for the detection of E/J cancers and may be a good supporting tool for the screening of E/J cancers.
Source
http://dx.doi.org/10.1111/jgh.15136
January 2021

Comparison of performances of artificial intelligence versus expert endoscopists for real-time assisted diagnosis of esophageal squamous cell carcinoma (with video).

Gastrointest Endosc 2020 Oct 4;92(4):848-855. Epub 2020 Jun 4.

AI Medical Service Inc, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.

Background And Aims: Narrow-band imaging (NBI) is currently regarded as the standard modality for diagnosing esophageal squamous cell carcinoma (SCC). We developed a computerized image-analysis system for diagnosing esophageal SCC by NBI and estimated its performance with video images.

Methods: Altogether, 23,746 images from 1544 pathologically proven superficial esophageal SCCs and 4587 images from 458 noncancerous and normal tissues were used to construct an artificial intelligence (AI) system. Five- to 9-second video clips from 144 patients, captured by NBI or blue-light imaging, were used as the validation dataset. These video images were diagnosed by the AI system and by 13 board-certified specialists (experts).

Results: The diagnostic process was divided into 2 parts: detection (identifying suspicious lesions) and characterization (differentiating cancer from noncancer). The sensitivities, specificities, and accuracies for the detection of SCC were, respectively, 91%, 51%, and 63% for the AI system and 79%, 72%, and 75% for the experts. The sensitivity of the AI system was significantly higher than that of the experts, but its specificity was significantly lower. The sensitivities, specificities, and accuracies for the characterization of SCC were, respectively, 86%, 89%, and 88% for the AI system and 74%, 76%, and 75% for the experts. The receiver operating characteristic curve showed that the AI system had significantly better diagnostic performance than the experts.

Conclusions: Our AI system showed significantly higher sensitivity for detecting SCC and higher accuracy for characterizing SCC from noncancerous tissue than endoscopic experts.
Source
http://dx.doi.org/10.1016/j.gie.2020.05.043
October 2020

Artificial intelligence-based diagnostic system classifying gastric cancers and ulcers: comparison between the original and newly developed systems.

Endoscopy 2020 Dec 5;52(12):1077-1083. Epub 2020 Jun 5.

Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan.

Background: We previously reported for the first time the usefulness of artificial intelligence (AI) systems in detecting gastric cancers. However, the "original convolutional neural network (O-CNN)" employed in the previous study had a relatively low positive predictive value (PPV). Therefore, we aimed to develop an advanced AI-based diagnostic system and evaluate its applicability for the classification of gastric cancers and gastric ulcers.

Methods: We constructed an "advanced CNN" (A-CNN) by adding a new training dataset (4453 gastric ulcer images from 1172 lesions) to the O-CNN, which had been trained using 13,584 gastric cancer and 373 gastric ulcer images. The diagnostic performance of the A-CNN in terms of classifying gastric cancers and ulcers was retrospectively evaluated using an independent validation dataset (739 images from 100 early gastric cancers and 720 images from 120 gastric ulcers) and compared with that of the O-CNN by estimating the overall classification accuracy.

Results: The sensitivity, specificity, and PPV of the A-CNN in classifying gastric cancer at the lesion level were 99.0% (95% confidence interval [CI] 94.6%-100%), 93.3% (95% CI 87.3%-97.1%), and 92.5% (95% CI 85.8%-96.7%), respectively, and for classifying gastric ulcers were 93.3% (95% CI 87.3%-97.1%), 99.0% (95% CI 94.6%-100%), and 99.1% (95% CI 95.2%-100%), respectively. At the lesion level, the overall accuracies of the O-CNN and A-CNN for classifying gastric cancers and gastric ulcers were 45.9% (gastric cancers 100%, gastric ulcers 0.8%) and 95.9% (gastric cancers 99.0%, gastric ulcers 93.3%), respectively.

Conclusion: The newly developed AI-based diagnostic system can effectively classify gastric cancers and gastric ulcers.
Source
http://dx.doi.org/10.1055/a-1194-8771
December 2020

Utilizing artificial intelligence in endoscopy: a clinician's guide.

Expert Rev Gastroenterol Hepatol 2020 Aug 17;14(8):689-706. Epub 2020 Jun 17.

Department of Surgical Oncology, Graduate School of Medicine, the University of Tokyo , Tokyo, Japan.

Introduction: Artificial intelligence (AI) that surpasses human ability in image recognition is expected to be applied in the field of gastrointestinal endoscopy, and its research and development (R&D) is being actively conducted. Meanwhile, as endoscopic diagnosis has advanced, a shortage of specialists who can perform high-precision endoscopy has emerged. We examine whether AI, with its excellent image-recognition ability, can overcome this problem.

Areas Covered: Papers on artificial intelligence using convolutional neural networks (CNNs, a form of deep learning) have been published since 2016. CNNs are generally capable of more accurate detection and classification than conventional machine learning. This is a review of papers using CNNs in the gastrointestinal endoscopy area, along with the reasons why AI is required in clinical practice. We divided this review into four parts: stomach, esophagus, large intestine, and capsule endoscopy (small intestine).

Expert Opinion: Potential applications of AI include colorectal polyp detection and differentiation, gastric and esophageal cancer detection, and lesion detection in capsule endoscopy. The accuracy of endoscopic diagnosis will increase if the AI and the endoscopist perform the endoscopy together.
Source
http://dx.doi.org/10.1080/17474124.2020.1779058
August 2020

Performance of a computer-aided diagnosis system in diagnosing early gastric cancer using magnifying endoscopy videos with narrow-band imaging (with videos).

Gastrointest Endosc 2020 Oct 15;92(4):856-865.e1. Epub 2020 May 15.

AI Medical Service Inc., Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan.

Background And Aims: The performance of magnifying endoscopy with narrow-band imaging (ME-NBI) using a computer-aided diagnosis (CAD) system in diagnosing early gastric cancer (EGC) is unclear. Here, we aimed to clarify the differences in the diagnostic performance between expert endoscopists and the CAD system using ME-NBI.

Methods: The CAD system was pretrained using 1492 cancerous and 1078 noncancerous images obtained using ME-NBI. One hundred seventy-four videos (87 cancerous and 87 noncancerous) were used to evaluate the diagnostic performance of the CAD system in terms of the area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). For each item, comparisons were made between the CAD system and 11 experts at our hospital who were skilled in diagnosing EGC using ME-NBI and had more than 1 year of clinical experience.

Results: The CAD system demonstrated an AUC of 0.8684. The accuracy, sensitivity, specificity, PPV, and NPV were 85.1% (95% confidence interval [95% CI], 79.0-89.6), 87.4% (95% CI, 78.8-92.8), 82.8% (95% CI, 73.5-89.3), 83.5% (95% CI, 74.6-89.7), and 86.7% (95% CI, 77.8-92.4), respectively. The CAD system was significantly more accurate than 2 experts, significantly less accurate than 1 expert, and not significantly different from the remaining 8 experts.
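The bracketed ranges are 95% binomial confidence intervals on the per-video counts; as a sketch of how such an interval is obtained (the exact Clopper-Pearson method is assumed here, since the paper's choice of method is not stated):

```python
# Sketch: 95% CI for a reported proportion, e.g. sensitivity 87.4% = 76/87 videos.
# Clopper-Pearson ("beta") is an assumption; the study may have used another method.
from statsmodels.stats.proportion import proportion_confint

lo, hi = proportion_confint(count=76, nobs=87, alpha=0.05, method="beta")
print(f"sensitivity 76/87 = {76/87:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```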

Conclusions: The overall performance of the CAD system using ME-NBI videos in diagnosing EGC was considered good and was equivalent to or better than that of several experts. The CAD system may prove useful in the diagnosis of EGC in clinical practice.
Source
http://dx.doi.org/10.1016/j.gie.2020.04.079
October 2020

Automatic detection of various abnormalities in capsule endoscopy videos by a deep learning-based system: a multicenter study.

Gastrointest Endosc 2021 Jan 15;93(1):165-173.e1. Epub 2020 May 15.

AI Medical Service Inc, Tokyo, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan.

Background And Aims: A deep convolutional neural network (CNN) system could be a high-level screening tool for capsule endoscopy (CE) reading but has not been established for targeting various abnormalities. We aimed to develop a CNN-based system and compare it with the existing QuickView mode in terms of their ability to detect various abnormalities.

Methods: We trained a CNN system using 66,028 CE images (44,684 images of abnormalities and 21,344 normal images). The detection rate of the CNN for various abnormalities was assessed per patient, using an independent test set of 379 consecutive small-bowel CE videos from 3 institutions. Mucosal breaks, angioectasia, protruding lesions, and blood content were present in 94, 29, 81, and 23 patients, respectively. The detection capability of the CNN was compared with that of QuickView mode.

Results: The CNN picked up 1,135,104 images (22.5%) from the 5,050,226 test images, and thus, the sampling rate of QuickView mode was set to 23% in this study. In total, the detection rate of the CNN for abnormalities per patient was significantly higher than that of QuickView mode (99% vs 89%, P < .001). The detection rates of the CNN for mucosal breaks, angioectasia, protruding lesions, and blood content were 100% (94 of 94), 97% (28 of 29), 99% (80 of 81), and 100% (23 of 23), respectively, and those of QuickView mode were 91%, 97%, 80%, and 96%, respectively.

Conclusions: We developed and tested a CNN-based detection system for various abnormalities using multicenter CE videos. This system could serve as an alternative high-level screening tool to QuickView mode.
Source
http://dx.doi.org/10.1016/j.gie.2020.04.080
January 2021

Application of Convolutional Neural Networks for Detection of Superficial Nonampullary Duodenal Epithelial Tumors in Esophagogastroduodenoscopic Images.

Clin Transl Gastroenterol 2020 Mar;11(3):e00154

AI Medical Service Inc., Tokyo, Japan.

Objectives: A superficial nonampullary duodenal epithelial tumor (SNADET) is defined as a mucosal or submucosal sporadic tumor of the duodenum that does not arise from the papilla of Vater. SNADETs rarely metastasize to the lymph nodes, and most can be treated endoscopically. However, SNADETs are sometimes missed during esophagogastroduodenoscopic examination. In this study, we constructed a convolutional neural network (CNN) and evaluated its ability to detect SNADETs.

Methods: A deep CNN was pretrained and fine-tuned using a training data set of endoscopic images of SNADETs (duodenal adenomas [N = 65] and high-grade dysplasias [HGDs] [N = 31]; 531 images in total). The CNN then evaluated a separate test set of images from 26 adenomas and 8 HGDs, plus 681 images of normal tissue (1,080 images in total). The gold standard for both the training and test data sets was a "true diagnosis" made by board-certified endoscopists and pathologists. A detected tumor was marked with a rectangular frame on the endoscopic image. If the frame overlapped at least part of the "true tumor" annotated by board-certified endoscopists, the CNN was considered to have "detected" the SNADET.
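Under the hit criterion just described, any overlap between the CNN's rectangular frame and the annotated tumor counts as a detection; a minimal sketch of that check, assuming boxes in (x1, y1, x2, y2) pixel coordinates:

```python
# Sketch of the "any overlap counts as detected" criterion from the Methods.
# Boxes are assumed to be (x1, y1, x2, y2) in pixel coordinates.
def boxes_overlap(pred, truth) -> bool:
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = truth
    return px1 < tx2 and tx1 < px2 and py1 < ty2 and ty1 < py2

def detected(pred_boxes, truth_box) -> bool:
    """A SNADET counts as detected if any predicted frame touches the true box."""
    return any(boxes_overlap(p, truth_box) for p in pred_boxes)
```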

Results: The trained CNN detected 94.7% (378 of 399) of SNADETs on an image basis (94% [280 of 298] of adenomas and 100% [101 of 101] of HGDs) and 100% on a tumor basis. The time needed for screening the 399 images containing SNADETs and all 1,080 images (including normal images) was 12 and 31 seconds, respectively.

Discussion: We used a novel algorithm to construct a CNN for detecting SNADETs in a short time.
Source
http://dx.doi.org/10.14309/ctg.0000000000000154
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7145048
March 2020

Detecting early gastric cancer: Comparison between the diagnostic ability of convolutional neural networks and endoscopists.

Dig Endosc 2021 Jan 2;33(1):141-150. Epub 2020 Jun 2.

AI Medical Service Inc, Tokyo, Japan.

Objectives: Detecting early gastric cancer is difficult, and it may even be overlooked by experienced endoscopists. Recently, artificial intelligence based on deep learning through convolutional neural networks (CNNs) has enabled significant advancements in the field of gastroenterology. However, it remains unclear whether a CNN can outperform endoscopists. In this study, we evaluated whether the performance of a CNN in detecting early gastric cancer is better than that of endoscopists.

Methods: The CNN was constructed using 13,584 endoscopic images from 2639 lesions of gastric cancer. Subsequently, its diagnostic ability was compared to that of 67 endoscopists using an independent test dataset (2940 images from 140 cases).

Results: The average diagnostic times for analyzing the 2940 test endoscopic images were 45.5 ± 1.8 s for the CNN and 173.0 ± 66.0 min for the endoscopists. The sensitivity, specificity, and positive and negative predictive values for the CNN were 58.4%, 87.3%, 26.0%, and 96.5%, respectively. These values for the 67 endoscopists were 31.9%, 97.2%, 46.2%, and 94.9%, respectively. The CNN had a significantly higher sensitivity than the endoscopists (by 26.5%; 95% confidence interval, 14.9-32.5%).

Conclusion: The CNN detected more early gastric cancer cases in a shorter time than the endoscopists. The CNN needs further training to achieve higher diagnostic accuracy. However, a diagnostic support tool for gastric cancer using a CNN will be realized in the near future.
Source
http://dx.doi.org/10.1111/den.13688
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7818187
January 2021

Automated endoscopic detection and classification of colorectal polyps using convolutional neural networks.

Therap Adv Gastroenterol 2020 Mar 20;13:1756284820910659. Epub 2020 Mar 20.

Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan.

Background: Recently, the American Society for Gastrointestinal Endoscopy addressed the 'resect and discard' strategy, determining that accurate differentiation of colorectal polyps (CPs) is necessary. Previous studies have suggested a promising application of artificial intelligence (AI), using deep learning in object recognition. Therefore, we aimed to construct an AI system that can accurately detect and classify CPs using still images stored during colonoscopy.

Methods: We used a deep convolutional neural network (CNN) architecture called Single Shot MultiBox Detector. We trained the CNN using 16,418 images from 4752 CPs and 4013 images of normal colorectums, and subsequently validated its performance on 7077 colonoscopy images, including 1172 CP images from 309 CPs of various types. Diagnostic speed and yields for the detection and classification of CPs were evaluated as measures of the trained CNN's performance.
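The Single Shot MultiBox Detector named above is a one-stage detector that outputs boxes, labels, and confidence scores per frame; a rough sketch of per-frame inference using torchvision's off-the-shelf SSD as a stand-in (the paper's own backbone, weights, and confidence threshold are not given here, so these are assumptions):

```python
# Sketch: per-frame polyp detection with an SSD-style model.
# torchvision's COCO-pretrained ssd300_vgg16 stands in for the paper's network;
# the 0.5 confidence threshold is an assumption.
import torch
from torchvision.models.detection import SSD300_VGG16_Weights, ssd300_vgg16

model = ssd300_vgg16(weights=SSD300_VGG16_Weights.COCO_V1).eval()

@torch.no_grad()
def detect(frame: torch.Tensor, score_threshold: float = 0.5):
    """Return boxes/labels/scores above threshold for one CHW float frame."""
    out = model([frame])[0]
    keep = out["scores"] >= score_threshold
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```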

Results: The processing time of the CNN was 20 ms per frame. The trained CNN detected 1246 CPs with a sensitivity of 92% and a positive predictive value (PPV) of 86%. The sensitivity and PPV were 90% and 83%, respectively, for white-light images, and 97% and 98% for narrow-band images. Among the correctly detected polyps, 83% of the CPs were accurately classified from the images. Furthermore, 97% of adenomas were precisely identified under white-light imaging.

Conclusions: Our CNN showed promise in being able to detect and classify CP through endoscopic images, highlighting its high potential for future application as an AI-based CP diagnosis support system for colonoscopy.
Source
http://dx.doi.org/10.1177/1756284820910659
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7092386
March 2020

Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network.

Gastrointest Endosc 2020 Jul 19;92(1):144-151.e1. Epub 2020 Feb 19.

AI Medical Service Inc., Tokyo, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan.

Background And Aims: Protruding lesions of the small bowel vary in wireless capsule endoscopy (WCE) images, and their automatic detection may be difficult. We aimed to develop and test a deep learning-based system to automatically detect protruding lesions of various types in WCE images.

Methods: We trained a deep convolutional neural network (CNN), using 30,584 WCE images of protruding lesions from 292 patients. We evaluated CNN performance by calculating the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, using an independent set of 17,507 test images from 93 patients, including 7507 images of protruding lesions from 73 patients.

Results: The developed CNN analyzed the 17,507 images in 530.462 seconds. The AUC for the detection of protruding lesions was 0.911 (95% confidence interval [CI], 0.9069-0.9155). The sensitivity and specificity of the CNN were 90.7% (95% CI, 90.0%-91.4%) and 79.8% (95% CI, 79.0%-80.6%), respectively, at the optimal cut-off value of 0.317 for the probability score. In a subgroup analysis by category of protruding lesion, the sensitivities were 86.5%, 92.0%, 95.8%, 77.0%, and 94.4% for the detection of polyps, nodules, epithelial tumors, submucosal tumors, and venous structures, respectively. In individual patient analyses (n = 73), the detection rate of protruding lesions was 98.6%.
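An "optimal cut-off" such as 0.317 is typically read off the ROC curve; one common criterion is Youden's J, sketched below with scikit-learn (whether the authors used this exact criterion is an assumption):

```python
# Sketch: choosing a probability cut-off from the ROC curve via Youden's J.
# y_true: 1 if the image contains a protruding lesion; y_score: CNN probability.
import numpy as np
from sklearn.metrics import roc_curve

def optimal_cutoff(y_true, y_score) -> float:
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr              # Youden's J = sensitivity + specificity - 1
    return float(thresholds[np.argmax(j)])
```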

Conclusion: We developed and tested a new computer-aided system based on a CNN to automatically detect various protruding lesions in WCE images. Patient-level analyses with larger cohorts and efforts to achieve better diagnostic performance are necessary in further studies.
Source
http://dx.doi.org/10.1016/j.gie.2020.01.054
July 2020

Artificial intelligence-based detection of pharyngeal cancer using convolutional neural networks.

Dig Endosc 2020 Nov 1;32(7):1057-1065. Epub 2020 Apr 1.

AI Medical Service Inc., Tokyo, Japan.

Objectives: The prognosis for pharyngeal cancer is relatively poor, as it is usually diagnosed at an advanced stage. Although the recent development of narrow-band imaging (NBI) and increased awareness among endoscopists have enabled the detection of superficial pharyngeal cancer, these techniques are still not prevalent worldwide. Meanwhile, artificial intelligence (AI)-based deep learning has led to significant advancements in various medical fields. Here, we demonstrate the diagnostic ability of AI-based detection of pharyngeal cancer from endoscopic images obtained during esophagogastroduodenoscopy.

Methods: We retrospectively collected 5403 training images of pharyngeal cancer from 202 superficial cancers and 45 advanced cancers from the Cancer Institute Hospital, Tokyo, Japan. Using these images, we developed an AI-based diagnostic system with convolutional neural networks. We prepared 1912 validation images from 35 patients with 40 pharyngeal cancers and 40 patients without pharyngeal cancer to evaluate our system.

Results: Our AI-based diagnostic system correctly detected all pharyngeal cancer lesions (40/40) in the patients with cancer, including three lesions smaller than 10 mm. On a per-image basis, the AI-based system detected pharyngeal cancers in images obtained via NBI with a sensitivity of 85.6%, much higher than the sensitivity for images obtained via white-light imaging (70.1%). The novel diagnostic system took only 28 s to analyze the 1912 validation images.

Conclusions: The novel AI-based diagnostic system detected pharyngeal cancer with high sensitivity. It could facilitate early detection, thereby leading to better prognosis and quality of life for patients with pharyngeal cancers in the near future.
Source
http://dx.doi.org/10.1111/den.13653
November 2020

[Current status of AI diagnosis in upper gastrointestinal tract].

Nihon Shokakibyo Gakkai Zasshi 2020;117(2):141-149

Tada Tomohiro Institute of Gastroenterology and Proctology.

Source
http://dx.doi.org/10.11405/nisshoshi.117.141
July 2020

Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma.

Esophagus 2020 Jul 24;17(3):250-256. Epub 2020 Jan 24.

AI Medical Service Inc., Toshima, Tokyo, Japan.

Objectives: In Japan, endoscopic resection (ER) is often used to treat esophageal squamous cell carcinoma (ESCC) when the invasion depth is diagnosed as EP-SM1, whereas ESCC invading SM2 or deeper is treated by surgery or chemoradiotherapy. It is therefore crucial to determine the invasion depth of ESCC by preoperative endoscopic examination. Recently, rapid progress has been made in the medical application of artificial intelligence (AI) with deep learning. In this study, we demonstrate the ability of AI to diagnose ESCC invasion depth.

Methods: We retrospectively collected 1751 training images of ESCC at the Cancer Institute Hospital, Japan. We developed an AI diagnostic system based on convolutional neural networks, using deep learning techniques with these images. Subsequently, 291 test images were prepared and reviewed by the AI diagnostic system and by 13 board-certified endoscopists to evaluate diagnostic accuracy.

Results: The AI diagnostic system detected 95.5% (279/291) of the ESCCs in the test images in 10 s, then analyzed the 279 detected images and correctly estimated the invasion depth of ESCC with a sensitivity of 84.1% and an accuracy of 80.9% in 6 s. The accuracy score of this system exceeded those of 12 of the 13 board-certified endoscopists, and its area under the curve (AUC) was greater than the AUCs of all the endoscopists.

Conclusions: The AI diagnostic system demonstrated higher diagnostic accuracy for ESCC invasion depth than the endoscopists and, therefore, can potentially be used in ESCC diagnosis.
Source
http://dx.doi.org/10.1007/s10388-020-00716-x
July 2020

Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network.

J Gastroenterol Hepatol 2020 Jul 27;35(7):1196-1200. Epub 2019 Dec 27.

AI Medical Service Inc., Tokyo, Japan.

Background And Aim: Detecting blood content in the gastrointestinal tract is one of the crucial applications of capsule endoscopy (CE). The suspected blood indicator (SBI) is a conventional tool used to automatically tag images depicting possible bleeding in the reading system. We aim to develop a deep learning-based system to detect blood content in images and compare its performance with that of the SBI.

Methods: We trained a deep convolutional neural network (CNN) system using 27,847 CE images (6503 images depicting blood content from 29 patients and 21,344 images of normal mucosa from 12 patients). We assessed its performance by calculating the area under the receiver operating characteristic curve (ROC-AUC) and its sensitivity, specificity, and accuracy, using an independent test set of 10,208 small-bowel images (208 images depicting blood content and 10,000 images of normal mucosa). The performance of the CNN was compared with that of the SBI in individual image analysis, using the same test set.

Results: The ROC-AUC for the detection of blood content was 0.9998. The sensitivity, specificity, and accuracy of the CNN were 96.63%, 99.96%, and 99.89%, respectively, at a cut-off value of 0.5 for the probability score; these were significantly higher than those of the SBI (76.92%, 99.82%, and 99.35%, respectively). The trained CNN required 250 s to evaluate the 10,208 test images.

Conclusions: We developed and tested a CNN-based detection system for blood content in CE images. This system has the potential to outperform the SBI system; patient-level analyses in larger studies are required.
Source
http://dx.doi.org/10.1111/jgh.14941
July 2020

Endoscopic detection and differentiation of esophageal lesions using a deep neural network.

Gastrointest Endosc 2020 Feb 1;91(2):301-309.e1. Epub 2019 Oct 1.

AI Medical Service Inc, Tokyo, Japan; Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan; Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.

Background And Aims: Diagnosing esophageal squamous cell carcinoma (SCC) depends on individual physician expertise and may be subject to interobserver variability. Therefore, we developed a computerized image-analysis system to detect and differentiate esophageal SCC.

Methods: A total of 9591 nonmagnifying endoscopy (non-ME) and 7844 magnifying endoscopy (ME) images of pathologically confirmed superficial esophageal SCCs and 1692 non-ME and 3435 ME images of noncancerous lesions or normal esophagus were used as training image data. Validation was performed using 255 non-ME white-light images, 268 non-ME narrow-band/blue-laser images, and 204 ME narrow-band/blue-laser images from 135 patients. The same validation test data were diagnosed by 15 board-certified specialists (experienced endoscopists).

Results: Regarding diagnosis by non-ME with narrow-band imaging/blue-laser imaging, the sensitivity, specificity, and accuracy were 100%, 63%, and 77%, respectively, for the artificial intelligence (AI) system and 92%, 69%, and 78%, respectively, for the experienced endoscopists. Regarding diagnosis by non-ME with white-light imaging, the sensitivity, specificity, and accuracy were 90%, 76%, and 81%, respectively, for the AI system and 87%, 67%, and 75%, respectively, for the experienced endoscopists. Regarding diagnosis by ME, the sensitivity, specificity, and accuracy were 98%, 56%, and 77%, respectively, for the AI system and 83%, 70%, and 76%, respectively, for the experienced endoscopists. There was no significant difference in the diagnostic performance between the AI system and the experienced endoscopists.

Conclusions: Our AI system showed high sensitivity for detecting SCC by non-ME and high accuracy for differentiating SCC from noncancerous lesions by ME.
Source
http://dx.doi.org/10.1016/j.gie.2019.09.034
February 2020

Convolutional Neural Network for Differentiating Gastric Cancer from Gastritis Using Magnified Endoscopy with Narrow Band Imaging.

Dig Dis Sci 2020 May 4;65(5):1355-1363. Epub 2019 Oct 4.

AI Medical Service Inc., Arai Building 2F, 1-10-13 Minami Ikebukuro, Toshima-ku, Tokyo, 171-0022, Japan.

Background: Early detection of early gastric cancer (EGC) allows for less invasive cancer treatment. However, differentiating EGC from gastritis remains challenging. Although magnifying endoscopy with narrow band imaging (ME-NBI) is useful for differentiating EGC from gastritis, acquiring this skill takes substantial effort. Since the development of convolutional neural networks (CNNs), which can convolve an input image while maintaining its characteristics and thereby classify it, the image recognition ability of such systems has improved dramatically.

Aims: To explore the diagnostic ability of the CNN system with ME-NBI for differentiating between EGC and gastritis.

Methods: A 22-layer CNN system was pre-trained using 1492 EGC and 1078 gastritis images from ME-NBI. A separate test data set (151 EGC and 107 gastritis images based on ME-NBI) was used to evaluate the diagnostic ability [accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV)] of the CNN system.

Results: The accuracy of the CNN system with ME-NBI images was 85.3%, with 220 of the 258 images being correctly diagnosed. The method's sensitivity, specificity, PPV, and NPV were 95.4%, 71.0%, 82.3%, and 91.7%, respectively. Seven of the 151 EGC images were recognized as gastritis, whereas 31 of the 107 gastritis images were recognized as EGC. The overall test speed was 51.83 images/s (0.02 s/image).

Conclusions: The CNN system with ME-NBI can differentiate between EGC and gastritis in a short time with high sensitivity and NPV. Thus, the CNN system may complement current clinical practice of diagnosis with ME-NBI.
Source
http://dx.doi.org/10.1007/s10620-019-05862-6
May 2020

Clinical usefulness of a deep learning-based system as the first screening on small-bowel capsule endoscopy reading.

Dig Endosc 2020 May 2;32(4):585-591. Epub 2019 Oct 2.

Department of Surgical Oncology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan.

Background And Aim: To examine whether our convolutional neural network (CNN) system based on deep learning can reduce the reading time of endoscopists without abnormalities being overlooked in the capsule-endoscopy reading process.

Methods: Twenty videos of entire small-bowel capsule endoscopy procedures were prepared, each of which included 0-5 small-bowel mucosal breaks (erosions or ulcerations). At another institution, two reading processes were compared: (A) endoscopist-alone reading and (B) endoscopist reading after a first screening by the proposed CNN. In process B, endoscopists read only the images detected by the CNN. Two experts and four trainees independently read 20 videos each (10 under process A and 10 under process B). The outcomes were the reading time and the detection rate of mucosal breaks by the endoscopists. The gold standard was the findings made by two experts at the original institution.
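Process B amounts to using the CNN as a first-pass filter so that the reader reviews only flagged frames; a schematic sketch, where cnn_flags stands in for the trained model's per-frame decision:

```python
# Sketch of the two reading processes compared in the study.
# "read" is the endoscopist's review; "cnn_flags" stands in for the trained CNN.
def process_a(frames, read):
    """Endoscopist-alone reading: every frame is reviewed."""
    return [read(f) for f in frames]

def process_b(frames, read, cnn_flags):
    """CNN-first screening: only CNN-flagged frames are reviewed."""
    return [read(f) for f in frames if cnn_flags(f)]
```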

Results: Mean reading time of small-bowel sections by endoscopists was significantly shorter during process B (expert, 3.1 min; trainee, 5.2 min) compared to process A (expert, 12.2 min; trainee, 20.7 min) (P < 0.001). For 37 mucosal breaks, detection rate by endoscopists did not significantly decrease in process B (expert, 87%; trainee, 55%) compared to process A (expert, 84%; trainee, 47%). Experts detected all eight large lesions (>5 mm), but trainees could not, even when supported by the CNN.

Conclusions: Our CNN-based system for capsule endoscopy videos reduced the reading time of endoscopists without decreasing the detection rate of mucosal breaks. However, the reading level of endoscopists should be considered when using the system.
Source
http://dx.doi.org/10.1111/den.13517
May 2020

A novel approach to the selection of an appropriate pacing position for optimal cardiac resynchronization therapy using CT coronary venography and myocardial perfusion imaging: FIVE STaR method (fusion image using CT coronary venography and perfusion SPECT applied for cardiac resynchronization therapy).

J Nucl Cardiol 2021 Aug 21;28(4):1438-1445. Epub 2019 Aug 21.

Department of Cardiology, Hakodate Goryoukaku Hospital, 38-3 Goryoukaku, Hakodate, Hokkaido, 040-8611, Japan.

Background: Nearly one-third of patients with advanced heart failure (HF) do not benefit from cardiac resynchronization therapy (CRT). We developed a novel approach for optimizing CRT via simultaneous assessment of myocardial viability and the appropriate lead position, using a fusion technique combining CT coronary venography and myocardial perfusion imaging.

Methods And Results: Myocardial viability and coronary venous anatomy were evaluated by resting Tc-99m-tetrofosmin myocardial perfusion imaging (MPI) and contrast CT venography, respectively. Using fusion images reconstructed from MPI and CT coronary venography, the pacing site and lead length were determined for appropriate CRT device implantation in 4 HF patients. The efficacy of this method was estimated by symptomatic and echocardiographic functional parameters. In all patients, fusion images using MPI and CT coronary venograms were successfully reconstructed without any misregistration and contributed to effective CRT. Before surgery, this method enabled the operators to precisely identify the optimal indwelling site, one exhibiting myocardial viability, and to determine the lead length necessary for appropriate device implantation.

Conclusions: The fusion image technique using myocardial perfusion imaging and CT coronary venography is clinically feasible and promising for optimizing CRT and enhancing patient safety in patients with advanced HF.
Source
http://dx.doi.org/10.1007/s12350-019-01856-z
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8421301
August 2021