Master's thesis / National Yunlin University of Science and Technology / Master's Program, Graduate Institute of Communications Engineering / 97 / The study was divided into two parts: the first was fractal analysis of colonoscopic images for clinical metabolic syndrome, and the second was the development of a real-time navigation system.
Fractal analysis of the retinal vasculature has been reported to correlate significantly with diabetic retinopathy. Likewise, the colonic submucosal vasculature can be clearly observed on colonoscopy. To establish and evaluate an image classifier for clinical metabolic syndrome, we analyzed the fractal dimension (FD) and texture characteristics of colonic vascular networks observed during sedation colonoscopy. We retrospectively selected clear colonoscopic images of 120 subjects with clinical metabolic syndrome and 90 healthy control subjects from the image database of a health examination center. Regions of interest (ROIs) containing the vascular networks of the major vascular trunk and the primary mucosal branches were adopted. White-light reflection points were detected and removed by inpainting [10, 11]. The fractal dimension of each ROI was calculated with the shifting differential box counting algorithm [14]. Six additional statistical texture features were then combined with the fractal dimensions in a support vector machine for better classification. Our results revealed that the ROIs obtained from metabolic syndrome subjects had smaller FDs than those from healthy control subjects in the R, G, and B channels as well as in the vessel component. To our knowledge, this is the first study to apply fractal analysis of colonoscopic vascular networks as a novel image classifier for clinical metabolic syndrome. Combining the fractal dimension with the six texture features yields high discrimination rates in differentiating clinical metabolic syndrome from healthy controls: the accuracy reached 91.14% to 97.71% in R, G, and B ROIs containing mucosa and vessel networks, and 93.21% to 94.05% in ROIs of binarized vessel-network images.
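As a rough illustration of the core FD computation, the sketch below estimates the fractal dimension of a grayscale ROI with plain differential box counting. The thesis uses a shifting variant of this algorithm [14]; the box sizes, 8-bit gray-level range, and near-square-ROI assumption here are illustrative choices, not parameters taken from the study.

```python
# Minimal sketch of differential box counting (DBC) for a grayscale ROI.
# Assumptions: ROI is roughly square, 8-bit gray levels, illustrative box sizes.
import numpy as np

def dbc_fractal_dimension(roi, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a grayscale ROI via DBC."""
    roi = np.asarray(roi, dtype=np.float64)
    M = min(roi.shape)          # side length actually scanned
    G = 256.0                   # assumed 8-bit gray-level range
    log_inv_r, log_N = [], []
    for s in box_sizes:
        if s >= M:
            continue
        h = s * G / M           # box height along the intensity axis
        n_boxes = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = roi[i:i + s, j:j + s]
                # boxes of height h needed to cover the intensity range
                # spanned inside this s x s block
                n_boxes += int(block.max() // h) - int(block.min() // h) + 1
        log_inv_r.append(np.log(M / s))   # log(1/r) with r = s / M
        log_N.append(np.log(n_boxes))
    # FD is the slope of log(N_r) against log(1/r)
    slope, _intercept = np.polyfit(log_inv_r, log_N, 1)
    return slope
```

In the study itself, such per-channel FDs (R, G, B, and vessel component) are combined with the six statistical texture features and classified with a support vector machine.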
We aimed to encode physicians' clinical experience in computer algorithms and propose a novel endoscopy navigation system that uses hierarchical image analysis to guide directional insertion. Consecutive colonoscopic images were transformed from RGB into HSV color space and classified as very bright, very dark, double-exposure, or clear colon images; only clear colon images were adopted for further analysis. To guide the direction toward the lumen, we developed two algorithms: the darkness detection algorithm (DDA) and the fold detection algorithm (FDA). Because of limited illumination, the distant lumen tends to appear dark in the endoscopic view, so the DDA is based on the fact that the lumen is always darker than the nearby mucosa and defines navigation arrows pointing at the darkest field. Meanwhile, physicians' clinical experience indicates that insertion toward the lumen tends to be oriented perpendicularly to the circular folds of the colon, so we use Canny edge detection to find the circular folds and define the navigation arrows as arrows perpendicular to, and halfway within, the folds, pointing inwards. The system navigates in a hierarchical fashion, applying the DDA first and falling back to the FDA. The accuracy of the proposed system was then evaluated by another endoscopist, who interpreted all analyzed clear images in a blinded, random order. For the DDA, we calculated the angle between the interpreter's arrow and the system's arrow, and angles of less than 45 degrees were regarded as correct. For the FDA, accuracy was calculated as the percentage of correct arrows among all arrows.

A total of 12,756 still frames were captured from the colonoscopy video (426 seconds at 30 frames/s). The percentages of very bright, very dark, double-exposure, and clear colon images in the full video are 1.39% (177/12,756), 8.35% (1,065/12,756), 5.99% (764/12,756), and 84.27% (10,750/12,756), respectively. The 12,756 frames were divided into six subsets of 2,126 frames each by temporal subsampling, and one subset (equivalent to 71 seconds of video) was selected at random as the test set. The percentages of very bright, very dark, double-exposure, and clear colon images in the test set are 1.55% (33/2,126), 8.04% (171/2,126), 4.99% (106/2,126), and 85.42% (1,816/2,126), respectively. Among the clear images of the test set, 38.38% (697/1,816) and 37.00% (672/1,816) were analyzed by the DDA and the FDA, respectively. The average accuracy reached 97.56% (680/697) for the DDA and 98.04% (2,153/2,196) for the FDA. Processing each frame takes less than one second on average.
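A minimal sketch of the hierarchical guidance idea is given below, assuming OpenCV: a frame is first labeled in HSV space and, if clear, an arrow is drawn from the image centre toward the darkest (heavily smoothed) region, as in the DDA. The thresholds, blur size, file names, and the omission of double-exposure detection are simplifications in this sketch, not details from the thesis.

```python
# Sketch of HSV frame screening plus the darkness detection algorithm (DDA).
# Thresholds, blur kernel, and file names are illustrative assumptions.
import cv2
import numpy as np

def classify_frame(bgr, dark_thr=40, bright_thr=220):
    """Label a frame as 'very_dark', 'very_bright', or 'clear' from its V channel."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    if np.mean(v) < dark_thr:
        return "very_dark"
    if np.mean(v) > bright_thr:
        return "very_bright"
    return "clear"   # double-exposure detection omitted in this sketch

def dda_arrow(bgr, blur_ksize=31):
    """Return (start, end) points of an arrow toward the darkest field."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    smooth = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    # the darkest location after smoothing approximates the distant lumen
    _, _, min_loc, _ = cv2.minMaxLoc(smooth)
    h, w = gray.shape
    return (w // 2, h // 2), min_loc

if __name__ == "__main__":
    frame = cv2.imread("colon_frame.png")          # hypothetical input frame
    if frame is not None and classify_frame(frame) == "clear":
        start, end = dda_arrow(frame)
        cv2.arrowedLine(frame, start, end, (0, 255, 0), 2)
        cv2.imwrite("navigated_frame.png", frame)
```

In the full system, frames without a sufficiently dark field would instead be handled by the FDA, which detects the circular folds with Canny edge detection and draws arrows perpendicular to them.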
Identifier | oai:union.ndltd.org:TW/097YUNT5650013
Date | January 2009
Creators | Syu-Jyun Peng, 彭徐鈞
Contributors | Hsuan-Ting Chang, 張軒庭
Source Sets | National Digital Library of Theses and Dissertations in Taiwan
Language | zh-TW
Detected Language | English
Type | thesis (學位論文)
Format | 161