131

Rozpoznávání obrazů pro ovládání robotické ruky / Image recognition for robotic hand

Labudová, Kristýna January 2017 (has links)
This thesis concerns the processing and classification of images of embedded terminals. It analyzes the problem of moiré noise reduction through filtering in the frequency domain, together with image normalization for further processing. Keypoint detectors and descriptors are used for image classification: the FAST and Harris corner detectors and the SURF, BRIEF, and BRISK descriptors are emphasized, along with their evaluation in terms of potential contribution to this work.
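As a rough illustration of the keypoint pipeline named in this abstract (not the thesis's actual implementation), the sketch below detects FAST keypoints and computes binary BRISK descriptors with OpenCV; the synthetic images are placeholders for real terminal photographs.

```python
# Illustrative sketch only: FAST keypoints + BRISK descriptors with OpenCV,
# roughly the kind of pipeline the abstract describes (synthetic images stand in
# for real terminal photographs).
import cv2
import numpy as np

rng = np.random.default_rng(0)
img = (rng.random((240, 320)) * 255).astype(np.uint8)       # placeholder "terminal" image
template = (rng.random((240, 320)) * 255).astype(np.uint8)  # placeholder reference image

fast = cv2.FastFeatureDetector_create(threshold=25)          # corner detector
brisk = cv2.BRISK_create()                                   # binary descriptor

kp_img = fast.detect(img, None)
kp_img, desc_img = brisk.compute(img, kp_img)
kp_tpl = fast.detect(template, None)
kp_tpl, desc_tpl = brisk.compute(template, kp_tpl)

# Binary descriptors are matched with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(desc_img, desc_tpl)
print(len(kp_img), "keypoints,", len(matches), "matches")
```

Harris corners (cv2.cornerHarris) or other descriptors could be slotted into the same structure where available.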
132

Object Detection with Deep Convolutional Neural Networks in Images with Various Lighting Conditions and Limited Resolution / Detektion av objekt med Convolutional Neural Networks (CNN) i bilder med dåliga belysningförhållanden och lågupplösning

Landin, Roman January 2021 (has links)
Computer vision is a key component of any autonomous system. Real-world computer vision applications rely on proper and accurate detection and classification of objects. A detection algorithm that does not guarantee reasonable detection accuracy is not applicable in real-time scenarios where safety is the main objective. Factors that impact detection accuracy include illumination conditions and image resolution; both contribute to degradation of objects and lead to low classification and detection accuracy. Recent development of Convolutional Neural Network (CNN) based algorithms offers possibilities for low-light (LL) image enhancement and super-resolution (SR) image generation, which makes it possible to combine such models to improve image quality and increase detection accuracy. This thesis evaluates different CNN models for SR generation and LL enhancement by comparing generated images against ground-truth images. To quantify the impact of each model on detection accuracy, a detection procedure was evaluated on the generated images. Experimental results on images selected from the NightOwls and Caltech Pedestrian datasets showed that super-resolution image generation and low-light image enhancement improve detection accuracy by a substantial margin. Additionally, a cascade of SR generation and LL enhancement was shown to further boost detection accuracy. However, the main drawback of such cascades is the increased computational time, which limits their use in a range of real-time applications.
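A minimal sketch of the cascade described above (low-light enhancement, then super-resolution, then detection), using tiny untrained PyTorch stand-ins; every module, weight, and shape here is a hypothetical placeholder rather than the models evaluated in the thesis.

```python
# Sketch of a low-light-enhancement -> super-resolution -> detection cascade.
# The modules below are untrained stand-ins, not the models evaluated in the thesis.
import torch
import torch.nn as nn

class LowLightEnhancer(nn.Module):
    """Placeholder LL-enhancement net: two conv layers."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class SuperResolver(nn.Module):
    """Placeholder 2x SR net using pixel shuffle."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 12, 3, padding=1), nn.ReLU(),
            nn.PixelShuffle(2),  # 12 channels -> 3 channels, 2x spatial upscale
        )
    def forward(self, x):
        return self.net(x)

def detect(image: torch.Tensor):
    """Placeholder detector: returns dummy boxes; a real pipeline would run a trained detector."""
    return [{"box": [0, 0, 10, 10], "score": 0.5}]

ll, sr = LowLightEnhancer().eval(), SuperResolver().eval()
low_res_dark = torch.rand(1, 3, 120, 160)        # simulated low-light, low-resolution frame
with torch.no_grad():
    enhanced = ll(low_res_dark)                  # step 1: low-light enhancement
    upscaled = sr(enhanced)                      # step 2: super-resolution (2x)
detections = detect(upscaled)                    # step 3: detection on the improved image
print(upscaled.shape, detections)
```

The extra forward passes in such a cascade are exactly where the computational-time drawback noted in the abstract comes from.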
133

Machine Learning and Deep Learning Approaches to Print defect Detection, Face Set Recognition, Face Alignment, and Visual Enhancement in Space and Time

Xiaoyu Xiang (11166546) 21 July 2021 (has links)
The research includes machine learning and deep learning approaches to print defect detection, face set recognition and face alignment, and visual enhancement in space and time. This thesis consists of six parts, corresponding to six projects:

In Chapter 1, the first project focuses on the detection of local printing defects, including gray spots and solid spots. We propose a coarse-to-fine method to detect local defects in a block-wise manner and aggregate the block-wise attributes to generate the feature vector of the whole test page for a further ranking task. In the detection part, we first select candidate regions by thresholding a single feature. Then more detailed features of candidate blocks are calculated and sent to a decision tree that was previously trained on our training dataset. The final result is given by the decision tree model to control the false alarm rate while maintaining the required miss rate.

Chapter 2 introduces face set recognition and Chapter 3 is about face alignment. In order to reduce the computational complexity of comparing face sets, we propose a deep neural network that can compute and aggregate the face feature vectors with different weights. As for face alignment, our goal is to reduce the jittering of landmark locations when the model is applied to video. We propose metrics and corresponding methods around this goal.

In recent years, mobile photography has become increasingly prevalent in our lives with social media due to its high portability and convenience. However, many challenges still exist in distributing high-quality mobile images and videos under the limits of data capacity, hardware storage, and network bandwidth. Therefore, we have been exploring enhancement techniques to improve image and video quality, considering both effectiveness and efficiency for a wide variety of applications, including WhatsApp, Portal, TikTok, and even the printing industry. Chapter 4 introduces single-image super-resolution to handle real-world images with various degradations, and its influence on several downstream high-level computer vision tasks. Next, Chapter 5 studies headshot image restoration with multiple references, which is an application of visual enhancement under more specific scenarios. Finally, as a step towards temporal-domain enhancement, the Zooming SlowMo framework for fast and accurate space-time video super-resolution is introduced in Chapter 6.
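A hedged sketch of the coarse-to-fine block-wise idea from Chapter 1: screen blocks with a cheap single feature, compute richer features for the survivors, and classify them with a decision tree. The features, threshold, and synthetic training data below are illustrative placeholders, not the thesis's attributes.

```python
# Block-wise coarse-to-fine defect detection sketch (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def block_features(block: np.ndarray) -> np.ndarray:
    """Toy per-block features; a real system would use carefully designed attributes."""
    return np.array([block.mean(), block.std(), block.max() - block.min()])

def detect_defect_blocks(page: np.ndarray, clf: DecisionTreeClassifier,
                         block=32, coarse_thresh=8.0):
    """Coarse stage: keep blocks whose std exceeds a threshold.
    Fine stage: classify the surviving blocks with a trained decision tree."""
    flagged = []
    for i in range(0, page.shape[0] - block + 1, block):
        for j in range(0, page.shape[1] - block + 1, block):
            b = page[i:i + block, j:j + block]
            if b.std() > coarse_thresh:                      # cheap single-feature screen
                if clf.predict(block_features(b)[None])[0] == 1:
                    flagged.append((i, j))
    return flagged

# Train on synthetic "clean" vs "spotted" blocks just to make the sketch runnable.
rng = np.random.default_rng(0)
clean = rng.normal(200, 2, (50, 32, 32))
spotted = clean.copy()
spotted[:, 10:16, 10:16] -= 80                               # dark "gray spot" defects
X = np.array([block_features(b) for b in np.concatenate([clean, spotted])])
y = np.array([0] * 50 + [1] * 50)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

page = rng.normal(200, 2, (256, 256))
page[64:70, 96:102] -= 80                                    # inject one defect
print(detect_defect_blocks(page, tree))
```

The per-block labels would then be aggregated into a page-level feature vector for the ranking step the abstract mentions.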
134

An in-vitro comparison of working length determination between a digital system and conventional film when source-film/sensor distance and exposure time are modified

Ley, Paul J. (Joseph), 1980- January 2009 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Accurate determination of working length during endodontic therapy is a crucial step in achieving a predictable outcome. Working length is determined by the use of electronic apex locators, tactile perception, knowledge of average tooth lengths, and/or dental radiography, whether digital or conventional. The aim of this study was to determine whether there is a difference between Schick digital radiography and Kodak Insight conventional film in accurately determining working lengths when exposure time and source-film/sensor distance are modified. Twelve teeth with size 15 K-flex files at varying known lengths from the anatomical apex were mounted in a resin-plaster mix to simulate bone density. Each tooth was radiographed while varying the source-film/sensor distance and exposure time. Four dental professionals examined the images and films independently. Ten images and 10 films were selected at random and re-examined to determine each examiner's repeatability. The error in working length was calculated as the observed value minus the known working length for each tooth type. A mixed-effects, full-factorial analysis of variance (ANOVA) model was used to model the error in working length. Included in the ANOVA model were fixed effects for type of image, distance, exposure time, and all two-way and three-way interactions. The repeatability of each examiner for each film type was assessed by estimating the intra-class correlation coefficient (ICC). The repeatability of each examiner on digital film was good, with ICCs ranging from 0.67 to 1.0. Repeatability on the conventional film was poor, with ICCs varying from -0.29 to 0.55. We found there was an overall difference between the conventional and digital films (p < 0.001). After adjusting for the effects of distance and exposure time, the error in the working length from the digital image was 0.1 mm shorter (95% CI: 0.06, 0.14) than the error in the working length from the film image. There was no difference among distances (p = 0.999) or exposure times (p = 0.158) for film or images. Based on the results of our study, we conclude that although there is a statistically significant difference, there is no clinically significant difference between digital radiography and conventional film when exposure time and source-film/sensor distance are adjusted.
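A small sketch of the analysis outlined in this abstract, assuming hypothetical column names and synthetic measurements: the error is computed as observed minus known working length and modeled with a mixed-effects fit (here simplified to main effects with a random intercept per tooth, rather than the full factorial model used in the study).

```python
# Illustrative working-length error analysis (synthetic data, hypothetical column names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 12 * 2 * 3 * 2  # teeth x image types x distances x exposure times
df = pd.DataFrame({
    "tooth": np.repeat(np.arange(12), n // 12),
    "image_type": np.tile(np.repeat(["digital", "film"], n // 24), 12)[:n],
    "distance": rng.choice(["near", "standard", "far"], n),
    "exposure": rng.choice(["short", "long"], n),
    "observed": rng.normal(20.0, 0.3, n),
    "known": 20.0,
})
df["error"] = df["observed"] - df["known"]   # observed minus known working length

# Mixed-effects model: fixed effects for image type, distance, and exposure;
# random intercept for tooth. (The study's model also included the two- and
# three-way interactions.)
model = smf.mixedlm("error ~ image_type + distance + exposure", df, groups=df["tooth"])
result = model.fit()
print(result.summary())
```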
135

An in vitro comparison of working length accuracy between a digital system and conventional film when vertical angulation of the object is variable

Christensen, Shane R. (Robert), 1977- January 2009 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Accurate determination of working length during endodontic therapy is critical in achieving a predictable and successful outcome. Working length is determined by the use of electronic apex locators, tactile perception, knowledge of average tooth lengths and dental radiography. Due to the increasing use of digital radiography in clinical practice, a comparison with conventional film in working length determination is justified. The purpose of this study is to determine if there is a difference between Schick digital radiography and Kodak Ultra-speed film in the accurate determination of working lengths when vertical angulation of the object is variable. Twelve teeth with #15 K-flex files at varying known lengths from the anatomical apex were mounted in a resin-plaster mix to simulate bone density. A mounting jig for the standardization of projection geometries allowed for exact changes in vertical angulation as it related to the object (tooth) and the film/sensor. Each tooth was imaged using Schick CDR and Kodak Ultra-speed film at varying angles with a consistent source-film distance and exposure time. Four dental professionals examined the images and films independently and measured the distance from the tip of the file to radiographic apex and recorded their results. The error in working length was calculated as the observed value minus the known working length for each tooth type. A mixed-effects, full-factorial analysis of variance (ANOVA) model was used to model the error in working length. Included in the ANOVA model were fixed effects for type of image, vertical angulation, and the interaction of angle and film type. Tooth type and examiner were included in the model as random effects assuming a compound symmetry covariance structure. The repeatability of each examiner, for each film type, was assessed by estimating the intra-class correlation coefficient (ICC). The ICC was determined when 12 randomly selected images and radiographs were reevaluated 10 days after initial measurements. The repeatability of each examiner for Schick CDR was good with ICCs ranging from 0.67 to 1.0. Repeatability for the conventional film was poor with ICCs varying from -0.29 to 0.55. We found the error in the working length was not significantly different between film types (p = 0.402). After adjusting for angle, we found that error in the working length from the digital image was only 0.02 mm greater (95-percent CI: -0.03, 0.06) than the conventional film. Furthermore, there was not a significant difference among the angles (p = 0.246) nor in the interaction of image type with angle (p = 0.149). Based on the results of our study, we conclude that there is not a statistically significant difference in determining working length between Schick CDR and Kodak Ektaspeed film when vertical angulation is modified.
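For the repeatability assessments reported in both of these studies, an intra-class correlation of the Shrout-Fleiss ICC(3,1) consistency type can be computed from a targets-by-readings matrix; the sketch below uses synthetic data and is only one of several ICC definitions.

```python
# ICC(3,1) (consistency) from a targets-by-measurements matrix; synthetic example data.
import numpy as np

def icc_consistency(ratings: np.ndarray) -> float:
    """ratings: shape (n_targets, k_measurements)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()       # between-target variation
    ss_cols = n * ((col_means - grand) ** 2).sum()       # between-reading variation
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(2)
true_lengths = rng.normal(20.0, 1.0, 12)               # 12 teeth
first_read = true_lengths + rng.normal(0, 0.1, 12)     # initial measurement
second_read = true_lengths + rng.normal(0, 0.1, 12)    # re-examination some days later
print(round(icc_consistency(np.column_stack([first_read, second_read])), 3))
```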
136

Three Stage Level Set Segmentation of Mass Core, Periphery, and Spiculations for Automated Image Analysis of Digital Mammograms

Ball, John E 05 May 2007 (has links)
In this dissertation, level set methods are employed to segment masses in digital mammographic images and to classify land cover classes in hyperspectral data. For the mammography computer aided diagnosis (CAD) application, level set-based segmentation methods are designed and validated for mass periphery segmentation, spiculation segmentation, and core segmentation. The proposed periphery segmentation uses the narrowband level set method in conjunction with an adaptive speed function based on a measure of the boundary complexity in the polar domain. The boundary complexity term is shown to be beneficial for delineating challenging masses with ill-defined and irregularly shaped borders. The proposed method is shown to outperform periphery segmentation methods currently reported in the literature. The proposed mass spiculation segmentation uses a generalized form of the Dixon and Taylor Line Operator along with narrowband level sets using a customized speed function. The resulting spiculation features are shown to be very beneficial for classifying the mass as benign or malignant. For example, when using patient age and texture features combined with a maximum likelihood (ML) classifier, the spiculation segmentation method increases the overall accuracy to 92% with 2 false negatives as compared to 87% with 4 false negatives when using periphery segmentation approaches. The proposed mass core segmentation uses the Chan-Vese level set method with a minimal variance criterion. The resulting core features are shown to be effective and comparable to periphery features, and are shown to reduce the number of false negatives in some cases. Most mammographic CAD systems use only a periphery segmentation, so those systems could potentially benefit from core features.
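As a loose off-the-shelf analogue of the Chan-Vese core segmentation mentioned above (not the dissertation's narrowband level sets or custom speed functions), scikit-image's morphological Chan-Vese can segment a synthetic bright, mass-like blob:

```python
# Morphological Chan-Vese segmentation of a synthetic bright blob (illustrative only;
# the dissertation uses custom narrowband level sets and speed functions).
import numpy as np
from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

# Synthetic "mass": a bright Gaussian blob on a noisy background.
yy, xx = np.mgrid[0:128, 0:128]
image = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 15.0 ** 2))
image += np.random.default_rng(3).normal(0, 0.05, image.shape)

init = checkerboard_level_set(image.shape, 6)          # initial level set
mask = morphological_chan_vese(image, 60, init_level_set=init, smoothing=2)
print("segmented pixels:", int(mask.sum()))
```

The resulting binary mask would correspond roughly to the "core" region from which shape and texture features could then be extracted.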
137

Robust Noise Filtering techniques for improving the Quality of SODISM images using Imaging and Machine Learning

Algamudi, Abdulrazag A.M. January 2020 (has links)
Life on Earth is strongly related to the Sun, which makes it a vital star to study and understand. To improve our knowledge of the way the Sun works, many satellites have been launched into space to monitor the Sun's activities, with one of the main focuses being the effect of these activities on the Earth's climate; PICARD is one such satellite. Due to the noise associated with SODISM images, the clarity of these images and the appearance of solar features are affected. Image denoising and enhancement are the main techniques to improve the visual appearance of SODISM images. Effective de-noising algorithms depend on properly detecting the noise present in the image; the aim is to identify which type of noise is present. To this end, a supervised machine-learning (ML) classifier is used to classify the type of noise present in the image. Furthermore, this work introduces a novel technique developed to enhance the quality of SODISM images. In this thesis, the Modified Undecimated Discrete Wavelet Transform (M-UDWT) technique is used to de-noise and enhance the quality of SODISM images. The proposed method is robust, effectively improves the quality of SODISM images, and brings out more precise information and clearer features. In addition, a non-wavelet enhancement method is also developed in this thesis, and its results are discussed. The new methods are assessed using two different approaches: subjective (by human observation) and objective (by calculation).
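A simplified sketch in the spirit of undecimated-wavelet denoising (a generic stationary-wavelet soft-thresholding stand-in, not the M-UDWT developed in the thesis; the noise level and wavelet choice are assumptions):

```python
# Generic undecimated (stationary) wavelet denoising with PyWavelets --
# a simplified stand-in for the thesis's M-UDWT, not its actual algorithm.
import numpy as np
import pywt

rng = np.random.default_rng(4)
clean = np.outer(np.hanning(256), np.hanning(256))      # smooth synthetic "solar disc"
noisy = clean + rng.normal(0, 0.05, clean.shape)

level = 2
coeffs = pywt.swt2(noisy, "db4", level=level)           # undecimated 2-D transform

sigma = 0.05                                            # assumed (or estimated) noise level
thresh = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
denoised_coeffs = [
    (approx, tuple(pywt.threshold(d, thresh, mode="soft") for d in details))
    for approx, details in coeffs
]
denoised = pywt.iswt2(denoised_coeffs, "db4")
print("residual RMSE:", float(np.sqrt(np.mean((denoised - clean) ** 2))))
```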
138

Avaliação de alterações volumétricas, metabólicas e atividades funcionais na Doença de Alzheimer, no comprometimento cognitivo leve e no envelhecimento normal / Evaluation of volumetric changes, metabolic, and functional activities in Alzheimer's disease, in mild cognitive impairment and in the normal aging

Perroco, Tíbor Rilho 06 February 2014 (has links)
This study consisted of clinical evaluation and application of cognitive tests, together with 3-tesla brain magnetic resonance imaging (MRI) processed with the Voxel-Based Morphometry (VBM) and Skull Strip techniques, and 18F-FDG PET-CT processed with Statistical Parametric Mapping (SPM8) and partial-volume correction (PVELab), in elderly subjects without cognitive impairment (CDR = 0), with amnestic mild cognitive impairment (MCI) (CDR = 0.5), and with mild Alzheimer's disease (mild AD) (CDR of 0.5 to 1). The objectives were to compare the patterns of structural and metabolic neuroimaging between groups, and to correlate volumetric structural changes on MRI and metabolic brain changes on PET-CT with a functional test, the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE), in the same sample. Each of the three groups, matched by age, contained 30 subjects, for a total sample of 90. Neuroimaging results, divided by comparisons between groups and corrected for education, were considered significant when the corrected significance was <= 0.05 (p-FWEcorr <= 0.05). In the MCI vs AD comparison, hypometabolism was observed in the right cingulate gyrus. In the AD vs MCI comparison, hypometabolism was observed in the left cingulate gyrus, the left precuneus, the right precuneus, and the inferior left parietal lobe. In the AD vs control comparison, using an a priori region of interest and Gaussian filters of 8 mm and 4 mm, a statistically significant reduction in gray-matter volume was observed in the left and right amygdala. On PET-CT, relative to the control group, the AD group showed areas of hypometabolism in the left cingulate gyrus, the right precuneus, and the right medial temporal gyrus. In the direct correlation with the IQCODE, the AD vs control comparison on PET-CT showed hypometabolism in the right fusiform gyrus. In conclusion, the results of the comparisons between groups were similar to those found in the literature for the early (mild) stages of the disease and also showed a tendency toward a continuum from controls to AD. On the other hand, the correlation of the IQCODE in the AD vs control comparison requires confirmation by other studies and with other statistical constructs.
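As a highly simplified illustration of the voxel-wise group comparison behind such results (the study itself used SPM8 with p-FWEcorr <= 0.05; the sketch below substitutes a plain two-sample t-test with Bonferroni correction on synthetic data):

```python
# Toy voxel-wise group comparison (synthetic data); the actual analysis used SPM8
# with p-FWEcorr <= 0.05, approximated here by a Bonferroni-corrected t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_voxels = 10_000
controls = rng.normal(1.0, 0.1, (30, n_voxels))          # simulated FDG uptake maps
patients = rng.normal(1.0, 0.1, (30, n_voxels))
patients[:, :50] -= 0.15                                  # hypometabolic region in the "AD" group

t, p = stats.ttest_ind(patients, controls, axis=0)
alpha_fwe = 0.05 / n_voxels                               # Bonferroni (conservative FWE stand-in)
significant = np.where((p < alpha_fwe) & (t < 0))[0]      # voxels with reduced metabolism
print(len(significant), "significant hypometabolic voxels")
```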
140

Multiscale and meta-analytic approaches to inference in clinical healthcare data

Hamilton, Erin Kinzel 29 March 2013 (has links)
The field of medicine is regularly faced with the challenge of utilizing information that is complicated or difficult to characterize. Physicians often must use their best judgment in reaching decisions or recommendations for treatment in the clinical setting. The goal of this thesis is to use innovative statistical tools in tackling three specific challenges of this nature from current healthcare applications. The first aim focuses on developing a novel approach to meta-analysis when combining binary data from multiple studies of paired design, particularly in cases of high heterogeneity between studies. The challenge is in properly accounting for heterogeneity when dealing with a low or moderate number of studies, and with a rarely occurring outcome. The proposed approach uses a Rasch model for translating data from multiple paired studies into a unified structure that allows for properly handling variability associated with both pair effects and study effects. Analysis is then performed using a Bayesian hierarchical structure, which accounts for heterogeneity in a direct way within the variances of the separate generating distributions for each model parameter. This approach is applied to the debated topic within the dental community of the comparative effectiveness of materials used for pit-and-fissure sealants. The second and third aims of this research both have applications in early detection of breast cancer. The interpretation of a mammogram is often difficult since signs of early disease are often minuscule, and the appearance of even normal tissue can be highly variable and complex. Physicians often have to consider many important pieces of the whole picture when trying to assess next steps. The final two aims focus on improving the interpretation of findings in mammograms to aid in early cancer detection. When dealing with high-frequency and irregular data, as is seen in most medical images, the behaviors of these complex structures are often difficult or impossible to quantify by standard modeling techniques. But a commonly occurring phenomenon in high-frequency data is that of regular scaling. The second aim in this thesis is to develop and evaluate a wavelet-based scaling estimator that reduces the information in a mammogram down to an informative and low-dimensional quantification of the innate scaling behavior, optimized for use in classifying the tissue as cancerous or non-cancerous. The specific demands for this estimator are that it be robust with respect to distributional assumptions on the data, and with respect to outlier levels in the frequency domain representation of the data. The final aim in this research focuses on enhancing the visualization of microcalcifications that are too small to capture well on screening mammograms. Using scale-mixing discrete wavelet transform methods, the existing detail information contained in a very small and coarse image will be used to impute scaled details at finer levels. These "informed" finer details will then be used to produce an image of much higher resolution than the original, improving the visualization of the object. The goal is to also produce a confidence area for the true location of the shape's borders, allowing for more accurate feature assessment. Through the more accurate assessment of these very small shapes, physicians may be more confident in deciding next steps.
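As a rough sketch of the wavelet-based scaling idea in the second aim (a generic log-energy slope estimator on a synthetic 1-D signal, not the robust estimator developed in the thesis), the log2 energy of detail coefficients can be regressed on decomposition level:

```python
# Generic wavelet log-energy scaling-slope estimator on a synthetic 1-D signal --
# a simplified analogue of the thesis's robust scaling estimator, not its method.
import numpy as np
import pywt

rng = np.random.default_rng(6)
signal = np.cumsum(rng.normal(size=4096))                 # random walk: a self-similar test signal

coeffs = pywt.wavedec(signal, "db4", level=8)             # [cA8, cD8, cD7, ..., cD1]
details = coeffs[1:][::-1]                                 # reorder as levels 1..8 (fine to coarse)

levels = np.arange(1, len(details) + 1)
log_energy = np.array([np.log2(np.mean(d ** 2)) for d in details])

slope, intercept = np.polyfit(levels, log_energy, 1)       # scaling (spectral) slope
hurst_estimate = (slope - 1) / 2                            # for fBm-like signals, slope ~ 2H + 1
print(f"slope = {slope:.2f}, Hurst estimate = {hurst_estimate:.2f}")
```

The slope (or a quantity derived from it) serves as the low-dimensional summary of scaling behavior that can then feed a cancerous vs non-cancerous classifier.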
