251

Three Stage Level Set Segmentation of Mass Core, Periphery, and Spiculations for Automated Image Analysis of Digital Mammograms

Ball, John E 05 May 2007 (has links)
In this dissertation, level set methods are employed to segment masses in digital mammographic images and to classify land cover classes in hyperspectral data. For the mammography computer aided diagnosis (CAD) application, level set-based segmentation methods are designed and validated for mass periphery segmentation, spiculation segmentation, and core segmentation. The proposed periphery segmentation uses the narrowband level set method in conjunction with an adaptive speed function based on a measure of the boundary complexity in the polar domain. The boundary complexity term is shown to be beneficial for delineating challenging masses with ill-defined and irregularly shaped borders. The proposed method is shown to outperform periphery segmentation methods currently reported in the literature. The proposed mass spiculation segmentation uses a generalized form of the Dixon and Taylor Line Operator along with narrowband level sets using a customized speed function. The resulting spiculation features are shown to be very beneficial for classifying the mass as benign or malignant. For example, when using patient age and texture features combined with a maximum likelihood (ML) classifier, the spiculation segmentation method increases the overall accuracy to 92% with 2 false negatives as compared to 87% with 4 false negatives when using periphery segmentation approaches. The proposed mass core segmentation uses the Chan-Vese level set method with a minimal variance criterion. The resulting core features are shown to be effective and comparable to periphery features, and are shown to reduce the number of false negatives in some cases. Most mammographic CAD systems use only a periphery segmentation, so those systems could potentially benefit from core features.
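A minimal sketch of the Chan-Vese evolution underlying the proposed core segmentation (illustrative only: the curvature term is approximated by a Laplacian, and the circular initialization, weights, and function name are assumptions, not the dissertation's settings):

```python
import numpy as np

def chan_vese(img, iters=500, dt=1.0, mu=0.1):
    """Minimal two-phase Chan-Vese evolution. Inside the contour is
    phi > 0; the curvature term is crudely approximated by a Laplacian."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    # initialize phi as a signed distance to a centered circle (an assumption)
    phi = min(h, w) / 3 - np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    for _ in range(iters):
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0    # mean inside
        c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean outside
        # smoothed Dirac delta localizes the update near the zero level set
        delta = 1.0 / (np.pi * (1.0 + phi ** 2))
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
        # minimal-variance data terms plus a smoothing term
        force = -(img - c1) ** 2 + (img - c2) ** 2 + mu * lap
        phi = phi + dt * delta * force
    return phi > 0  # boolean mask of the segmented region
```

On a synthetic image with a bright square on a dark background, the contour shrinks from the initial circle onto the square.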
252

Совершенствование подхода к сегментации кровеносных сосудов сетчатки с применением нейронных сетей : магистерская диссертация / Improving the approach to retinal blood vessel segmentation using neural networks

Мурас, Д. К., Muras, D. K. January 2024 (has links)
This study presents the development and evaluation of an improved CG-ResUnet neural network model for retinal blood vessel segmentation. The methodology includes preprocessing techniques such as CLAHE, Kirsch filtering, and grey filtering to improve image quality. The developed model achieved the highest accuracy (0.961), but it also showed the lowest area under the curve (AUC) (0.919). The lowest recall (0.872) indicates that the model still has room for improvement in minimizing false results and accurately identifying vessel pixels. The model's precision (0.631) is higher than that of the other models, indicating that the model is highly sensitive; however, additional tuning is required to achieve higher accuracy and overall segmentation quality. The F1-score (0.729) and Dice score (0.729) were also higher than those of the other models, indicating strong potential for improvement with further tuning. A hybrid post-processing approach combining automatic segmentation with manual adjustments is proposed to improve segmentation accuracy, especially for complex images with thin vessels. Future research should focus on improving accuracy and solving segmentation problems in areas of high complexity to further improve diagnostic efficiency and reduce manual labor in clinical settings.
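The Kirsch filtering named among the preprocessing techniques can be sketched as a compass operator in NumPy/SciPy (a generic illustration, not the thesis pipeline; CLAHE, also mentioned, is available elsewhere, e.g. as `cv2.createCLAHE` in OpenCV, and is omitted here):

```python
import numpy as np
from scipy.ndimage import convolve

def rotate45(k):
    """Rotate a 3x3 compass mask by 45 degrees: shift the 8 border
    values one step around the ring, leaving the center in place."""
    border = [k[0, 0], k[0, 1], k[0, 2], k[1, 2],
              k[2, 2], k[2, 1], k[2, 0], k[1, 0]]
    border = border[-1:] + border[:-1]
    out = k.copy()
    (out[0, 0], out[0, 1], out[0, 2], out[1, 2],
     out[2, 2], out[2, 1], out[2, 0], out[1, 0]) = border
    return out

def kirsch_edges(img):
    """Kirsch compass operator: convolve with the 8 rotated masks and
    keep the maximum response per pixel."""
    k = np.array([[5, 5, 5],
                  [-3, 0, -3],
                  [-3, -3, -3]], dtype=float)
    responses = []
    for _ in range(8):
        responses.append(convolve(img.astype(float), k, mode='nearest'))
        k = rotate45(k)
    return np.max(responses, axis=0)
```

On a vertical step edge the operator responds strongly on the edge and not at all in flat regions, which is why it is useful for enhancing vessel boundaries before segmentation.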
253

Совершенствование нейронной сети Unet для сегментации кровеносных сосудов сетчатки : магистерская диссертация / Improving the Unet neural network for retina blood vessel segmentation

Шутков, М. А., Shutkov, M. A. January 2024 (has links)
This study presents the development and evaluation of an enhanced neural network model, GCD-UNet, for the segmentation of retinal blood vessels. The methodology involved preprocessing techniques such as CLAHE, Gabor, and gray filtering to improve image quality, followed by a modified U-Net architecture incorporating a Dropout layer for better generalization. The model achieved an accuracy of 0.954, an AUC of 0.942, and a Dice coefficient of 0.770. These results indicate significant improvements in vessel pixel identification and overlap with ground-truth masks. Despite the high recall (0.932), the model's precision (0.562) suggests a need for further optimization to reduce false positives. A hybrid post-processing approach, combining automatic segmentation with manual adjustments, is proposed to enhance segmentation accuracy, particularly for complex images with thin vessels. Future research should focus on refining precision and addressing segmentation challenges in high-complexity regions to further improve diagnostic efficacy and reduce manual labor in clinical settings.
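The pixel-wise metrics quoted in the retinal-vessel abstracts above (precision, recall, Dice) can be computed from binary masks as follows (a generic sketch; the function name and signature are ours, not thesis code):

```python
import numpy as np

def seg_metrics(pred, truth):
    """Pixel-wise precision, recall and Dice for binary vessel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # vessel pixels correctly found
    fp = np.sum(pred & ~truth)   # background wrongly marked as vessel
    fn = np.sum(~pred & truth)   # vessel pixels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return precision, recall, dice
```

High recall with low precision, as reported for GCD-UNet, corresponds to many true vessels found but many false positives among the predictions.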
254

Segmentation of heterogeneous document images : an approach based on machine learning, connected components analysis, and texture analysis

Bonakdar Sakhi, Omid 06 December 2012 (has links) (PDF)
Document page segmentation is one of the most crucial steps in document image analysis. It ideally aims to explain the full structure of any document page, distinguishing text zones, graphics, photographs, halftones, figures, tables, etc. Although several attempts at achieving correct page segmentation have been made to date, many difficulties remain. The leader of the project in the framework of which this PhD work was funded (*) uses a complete processing chain in which page segmentation mistakes are manually corrected by human operators. Aside from the costs this represents, it demands tuning of a large number of parameters; moreover, some segmentation mistakes occasionally escape the vigilance of the operators. Current automated page segmentation methods are well accepted for clean printed documents, but they often fail to separate regions in handwritten documents when the document layout structure is loosely defined or when side notes are present inside the page. Moreover, tables and advertisements bring additional challenges for region segmentation algorithms. Our method addresses these problems and is divided into four parts:

1. Unlike most popular page segmentation methods, we first separate the text and graphics components of the page using a boosted decision tree classifier.
2. The separated text and graphics components are used, among other features, to separate columns of text in a two-dimensional conditional random fields framework.
3. A text line detection method based on piecewise projection profiles is then applied to detect text lines with respect to text region boundaries.
4. Finally, a new paragraph detection method, trained on common models of paragraphs, is applied to the text lines to find paragraphs based on the geometric appearance of the text lines and their indentations.

Our contribution over existing work lies in essence in the use, or adaptation, of algorithms borrowed from the machine learning literature to solve difficult cases. Indeed, we demonstrate a number of improvements: on separating text columns when one is situated very close to another; on preventing the contents of a table cell from being merged with the contents of adjacent cells; and on preventing regions inside a frame from being merged with surrounding text regions, especially side notes, even when the latter are written in a font similar to that of the text body. Quantitative assessment, and comparison of the performance of our method with competing algorithms using widely acknowledged metrics and evaluation methodologies, is also provided to a large extent.

(*) This PhD thesis was funded by the Conseil Général de Seine-Saint-Denis through the FUI6 project Demat-Factory, led by Safig SA.
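The projection-profile idea behind the text line detection step can be illustrated with a toy global (non-piecewise) detector: rows containing ink are grouped into maximal runs, each run being one line (a simplified sketch, not the thesis algorithm):

```python
import numpy as np

def detect_lines(binary):
    """Toy projection-profile line detector on a binary image
    (1 = ink): rows with any ink are grouped into maximal runs,
    each run being one text line, returned as (top, bottom) rows."""
    ink = binary.sum(axis=1) > 0  # horizontal projection profile
    lines, start = [], None
    for r, on in enumerate(ink):
        if on and start is None:
            start = r            # a line begins
        elif not on and start is not None:
            lines.append((start, r - 1))  # a line ends
            start = None
    if start is not None:
        lines.append((start, len(ink) - 1))
    return lines
```

The piecewise variant in the thesis applies this per column of text, which is what lets it respect region boundaries.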
255

Satellite Image Processing with Biologically-inspired Computational Methods and Visual Attention

Sina, Md Ibne 27 July 2012 (has links)
The human vision system is generally recognized as being superior to all known artificial vision systems. Visual attention, among the many processes related to human vision, is responsible for identifying relevant regions in a scene for further processing. In most cases, analyzing an entire scene is unnecessary and inevitably time-consuming; hence, considering visual attention can be advantageous. The subfield of computer vision in which this functionality is computationally emulated has been shown to hold high potential for solving real-world vision problems effectively. In this monograph, elements of visual attention are explored and algorithms are proposed that exploit such elements in order to enhance image understanding capabilities. Satellite images are given special attention due to their practical relevance, their inherent complexity in terms of image contents, and their resolution. Processing such large images using visual attention can be very helpful, since one can first identify relevant regions and deploy further detailed analysis in those regions only. Bottom-up features, which are directly derived from the scene contents, are at the core of visual attention and help identify salient image regions. In the literature, the use of intensity, orientation, and color as dominant features to compute bottom-up attention is ubiquitous. The effects of incorporating an entropy feature on top of the above-mentioned ones are also studied. This investigation demonstrates that such integration makes visual attention more sensitive to fine details and hence has the potential to be exploited in a suitable context. One interesting application of bottom-up attention, which is also examined in this work, is image segmentation. Since low-saliency regions generally correspond to homogeneously textured regions in the input image, a model can be learned from a homogeneous region and used to group similar textures existing in other image regions. Experimentation demonstrates that the proposed method produces realistic segmentation on satellite images. Top-down attention, on the other hand, is influenced by the observer's current state, such as knowledge, goals, and expectations. It can be exploited to locate target objects depending on various features, and it increases search or recognition efficiency by concentrating on the relevant image regions only. This technique is very helpful in processing large images such as satellite images. A novel algorithm for computing top-down attention is proposed that learns and quantifies important bottom-up features from a set of training images and enhances such features in a test image in order to localize objects with similar features. An object recognition technique is then deployed that extracts potential target objects from the computed top-down attention map and attempts to recognize them. An object descriptor is formed based on physical appearance and uses both texture and shape information. This combination is shown to be especially useful in the object recognition phase. The proposed texture descriptor is based on Legendre moments computed on local binary patterns, while shape is described using Hu moment invariants. Several tools and techniques, such as different types of moments of functions and combinations of different measures, have been applied for experimentation. The developed algorithms are general, efficient, and effective, and have the potential to be deployed for real-world problems. A dedicated software testing platform has been designed to facilitate the manipulation of satellite images and support a modular and flexible implementation of computational methods, including various components of visual attention models.
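The entropy feature studied on top of intensity, orientation, and colour can be illustrated with a sliding-window histogram entropy (a direct, unoptimized sketch; the window size, quantization, and assumption of input values in [0, 1] are ours):

```python
import numpy as np

def local_entropy(img, win=3, bins=8):
    """Shannon entropy of the grey-level histogram in a sliding window.
    Flat regions score 0 bits; textured regions score higher, which is
    why entropy makes saliency more sensitive to fine detail."""
    h, w = img.shape
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    # quantize intensities in [0, 1] to `bins` discrete levels
    q = np.minimum((padded * bins).astype(int), bins - 1)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = q[i:i + win, j:j + win].ravel()
            p = np.bincount(patch, minlength=bins) / patch.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out
```

A uniform image yields zero entropy everywhere, while a fine checkerboard texture scores close to one bit per pixel.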
256

Segmentation and tracking of cells and particles in time-lapse microscopy

Magnusson, Klas E. G. January 2016 (has links)
In biology, many different kinds of microscopy are used to study cells. There are many different kinds of transmission microscopy, where light is passed through the cells, that can be used without staining or other treatments that can harm the cells. There is also fluorescence microscopy, where fluorescent proteins or dyes are placed in the cells or in parts of the cells, so that they emit light of a specific wavelength when they are illuminated with light of a different wavelength. Many fluorescence microscopes can take images on many different depths in a sample and thereby build a three-dimensional image of the sample. Fluorescence microscopy can also be used to study particles, for example viruses, inside cells. Modern microscopes often have digital cameras or other equipment to take images or record time-lapse video. When biologists perform experiments on cells, they often record image sequences or sequences of three-dimensional volumes to see how the cells behave when they are subjected to different drugs, culture substrates, or other external factors. Previously, the analysis of recorded data has often been done manually, but that is very time-consuming and the results often become subjective and hard to reproduce. Therefore there is a great need for technology for automated analysis of image sequences with cells and particles inside cells. Such technology is needed especially in biological research and drug development. But the technology could also be used clinically, for example to tailor a cancer treatment to an individual patient by evaluating different treatments on cells from a biopsy. This thesis presents algorithms to find cells and particles in images, and to calculate tracks that show how they have moved during an experiment. We have developed a complete system that can find and track cells in all commonly used imaging modalities. 
We selected and extended a number of existing segmentation algorithms, thereby creating a complete tool to find cell outlines. To link the segmented objects into tracks, we developed a new track linking algorithm. The algorithm adds tracks one by one using dynamic programming and has many advantages over prior algorithms. Among other things, it is fast, it calculates tracks that are optimal for the entire image sequence, and it can handle situations where multiple cells have been segmented incorrectly as one object. To make it possible to use information about the velocities of the objects in the linking, we developed a method in which the positions of the objects are preprocessed using a filter before the linking is performed. This is important for tracking some particles inside cells and for tracking cell nuclei in some embryos. We have developed open-source software that contains all the tools necessary to analyze image sequences with cells or particles. It has tools for segmentation and tracking of objects, optimization of settings, manual correction, and analysis of outlines and tracks. We developed the software together with biologists who used it in their research. The software has already been used for data analysis in a number of biology publications. Our system has also achieved outstanding performance in three international objective comparisons of cell tracking systems.
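The linking idea can be illustrated with a much simpler frame-to-frame assignment baseline (the thesis algorithm instead adds whole tracks one at a time with dynamic programming over all frames; this sketch, including the `max_dist` gating, is not that method):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_pts, next_pts, max_dist=10.0):
    """Link detections between two consecutive frames by solving one
    assignment problem on Euclidean distances (Hungarian algorithm).
    Returns (prev_index, next_index) pairs for accepted links."""
    if len(prev_pts) == 0 or len(next_pts) == 0:
        return []
    # pairwise distance matrix between old and new detections
    cost = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # discard links longer than max_dist (object appeared or disappeared)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```

A globally optimal track-level method such as the one in the thesis avoids the greedy errors this per-frame scheme can make when cells cross or merge.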
257

Perfectionnement des algorithmes d'optimisation par essaim particulaire : applications en segmentation d'images et en électronique / Improvement of particle swarm optimization algorithms : applications in image segmentation and electronics

El Dor, Abbas 05 December 2012 (has links)
The successful resolution of a difficult optimization problem, comprising a large number of sub-optimal solutions, often justifies the use of powerful metaheuristics. A wide range of the algorithms used to solve these optimization problems belong to the class of population-based metaheuristics. Among them, Particle Swarm Optimization (PSO), which appeared in 1995, is inspired by the movement of individuals in a swarm, such as a bee swarm, a bird flock, or a fish school. The particles of the same swarm communicate with each other throughout the search to build a solution to the given problem, relying on their collective experience. The algorithm, which is easy to understand and implement, is particularly effective for optimization problems with continuous variables. However, like many metaheuristics, PSO has drawbacks that still deter some users. The premature convergence problem, where the algorithm stagnates in a local optimum and no longer progresses toward better solutions, is one of them. The objective of this thesis is to propose mechanisms that can be incorporated into PSO to overcome this drawback and to improve the performance and efficiency of PSO. We propose two algorithms, called PSO-2S and DEPSO-2S, to cope with the premature convergence problem. Both algorithms use innovative ideas and are characterized by new initialization strategies in several zones to ensure good coverage of the search space by the particles. To further improve PSO, we have also developed a new neighborhood topology, called Dcluster, which organizes the communication network between the particles. The experimental results obtained on a set of benchmark functions show the effectiveness of the strategies implemented in the proposed algorithms. Finally, PSO-2S is applied to real-world problems in image segmentation and electronics.
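The canonical 1995 global-best PSO that the thesis builds on can be sketched as follows (the multi-zone initialization of PSO-2S/DEPSO-2S and the Dcluster topology are not reproduced here; the inertia and acceleration weights are common defaults, an assumption):

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Canonical global-best PSO minimizing f over a box. Each particle
    is pulled toward its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best
    w, c1, c2 = 0.72, 1.49, 1.49  # common constriction-style weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())
```

On a smooth unimodal function such as the sphere, this baseline converges quickly; the premature convergence the thesis addresses shows up on multimodal benchmarks, where all particles collapse onto one local optimum.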
258

Processamento e análise de imagens histológicas de pólipos para o auxílio ao diagnóstico de câncer colorretal / Processing and analysis of histological images of polyps to aid in the diagnosis of colorectal cancer

Lopes, Antonio Alex 22 March 2019 (has links)
According to the National Cancer Institute (INCA), colorectal cancer is the third most common cancer among men and the second most common among women. Currently, the main method used for diagnosing disease from microscopic images, obtained from samples in conventional biopsy examinations, is visual evaluation by a pathologist. The use of computational image processing techniques enables the identification of elements and the extraction of features, which contributes to the study of the structural organization of tissues and their pathological variations, leading to greater precision in the decision-making process. Concepts and techniques involving complex networks are valuable resources for developing methods of structural analysis of components in medical images. From this perspective, the general objective of this work was the development of a method capable of processing and analyzing images obtained from biopsies of colon polyp tissue in order to classify the degree of atypia of the sample, which may be: without atypia, low grade, high grade, or cancer. Processing techniques, including a set of morphological operators, were used to perform the segmentation and identification of glandular structures. Next, structural analysis based on the identified glands was performed using complex network techniques. The networks were created by transforming the nuclei of the cells that make up the glands into vertices, connecting them with 1 to 20 edges, and extracting network measures to create a feature vector. In order to evaluate the proposed method comparatively, classical image feature extractors were used, namely Haralick descriptors, Hu moments, the Hough transform, and SampEn2D. After evaluating the proposed method in different analysis scenarios, the overall accuracy it obtained was 82.0%, surpassing the classical methods. It is concluded that the proposed method for classifying histological images of polyps using structural analysis based on complex networks is promising as a means of increasing the accuracy of colorectal cancer diagnosis.
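The construction of a complex network from cell nuclei can be sketched as a k-nearest-neighbour graph with a few simple measures (illustrative only; the thesis varies the number of edges per node from 1 to 20 and extracts richer network measures than these):

```python
import numpy as np

def nuclei_network_features(points, k=3):
    """Build an undirected graph over nucleus centroids by connecting
    each point to its k nearest neighbours, then return simple network
    measures usable as entries of a feature vector."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)   # no self-loops
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in np.argsort(d[i])[:k]:
            adj[i, j] = adj[j, i] = True  # undirected edge
    degrees = adj.sum(axis=1)
    return {'n_edges': int(adj.sum() // 2),
            'mean_degree': float(degrees.mean()),
            'max_degree': int(degrees.max())}
```

Sweeping k (as the thesis sweeps 1 to 20 edges) yields a sequence of such measures, and that sequence is what distinguishes regular glandular architecture from atypical tissue.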
259

Bone Fragment Segmentation Using Deep Interactive Object Selection

Estgren, Martin January 2019 (has links)
In recent years, semantic segmentation models utilizing Convolutional Neural Networks (CNNs) have seen significant success on many different segmentation problems. Models such as U-Net have produced promising results within the medical field for both regular 2D and volumetric imaging, rivalling some of the best classical segmentation methods. In this thesis we examined the possibility of using a convolutional neural network-based model to perform segmentation of discrete bone fragments in CT volumes with segmentation hints provided by a user. We additionally examined different classical segmentation methods used in a post-processing refinement stage and their effect on segmentation quality. We compared the performance of our model to similar approaches and provided insight into how the interactive aspect of the model affected the quality of the result. We found that the combined approach of interactive segmentation and deep learning produced results on par with some of the best methods presented, provided there was an adequate amount of annotated training data. We additionally found that the number of segmentation hints provided to the model by the user significantly affected the quality of the result, with the result converging at around 8 provided hints.
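Interactive models in the Deep Interactive Object Selection family commonly encode user clicks as extra distance-map input channels alongside the image; a sketch of that encoding (an assumption about the encoding used, not the thesis implementation):

```python
import numpy as np

def hint_channels(shape, pos_clicks, neg_clicks, cap=255.0):
    """Encode user clicks as truncated Euclidean distance maps: one
    channel for positive (object) clicks, one for negative (background)
    clicks, each capped at `cap` so empty channels stay bounded."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]

    def dist_map(clicks):
        if not clicks:
            return np.full(shape, cap)
        d = np.min([np.sqrt((yy - r) ** 2 + (xx - c) ** 2)
                    for r, c in clicks], axis=0)
        return np.minimum(d, cap)

    return np.stack([dist_map(pos_clicks), dist_map(neg_clicks)])
```

The two channels are concatenated with the CT slice before being fed to the network, so each additional user hint reshapes the input the model sees.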
260

Segmentação semiautomática de conjuntos completos de imagens do ventrículo esquerdo / Semiautomatic segmentation of left ventricle in full sets of cardiac images

Torres, Rafael Siqueira 05 April 2017 (has links)
The medical field has benefited from tools built by computing and, at the same time, has driven the development of new techniques in several computing specialties. Among these techniques, segmentation aims to separate objects of interest in an image, drawing the health professional's attention to areas relevant to the diagnosis. In addition, segmentation results can be used for the reconstruction of three-dimensional models, from which features can be extracted to assist the physician in decision making. However, the segmentation of medical images is still a challenge, because it is extremely dependent on the application and on the structures of interest present in the image. This dissertation presents a semiautomatic segmentation technique for the endocardium of the left ventricle in sets of cardiac Nuclear Magnetic Resonance images. The main contribution is segmentation that considers all the images from an examination, through the propagation of results obtained on previously processed images. Segmentation results are evaluated using objective metrics such as overlap, among others, against images provided by specialists in the field of Cardiology.
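The propagation of results through an exam's image stack can be illustrated by a minimal scheme that segments the next slice by thresholding and keeps only the connected components overlapping the previous slice's result (a sketch under assumed threshold-based segmentation, not the dissertation's technique):

```python
import numpy as np
from scipy.ndimage import label

def propagate_mask(prev_mask, next_img, low, high):
    """Segment the next slice by intensity thresholding, then keep only
    the connected components that overlap the previous slice's mask,
    so the result propagates through the stack slice by slice."""
    cand = (next_img >= low) & (next_img <= high)
    labels, n = label(cand)
    keep = np.zeros_like(cand)
    for lab in range(1, n + 1):
        comp = labels == lab
        if (comp & prev_mask).any():  # component touches previous result
            keep |= comp
    return keep
```

Applied slice after slice, this drops disconnected candidate regions (e.g. other structures in the same intensity range) while following the ventricle as it shifts between images.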
