  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Automatic Tissue Segmentation of Volumetric CT Data of the Pelvic Region

Jeuthe, Julius January 2017 (has links)
Automatic segmentation of human organs allows more accurate calculation of organ doses in radiation treatment planning, as it adds prior information about the material composition of imaged tissues. For instance, the separation of tissues into bone, adipose tissue and remaining soft tissues allows the use of tabulated material compositions for those tissues. This approximation is not perfect, because tissue composition varies among patients, but it is still better than no approximation at all. Another use for automated tissue segmentation is in model-based iterative reconstruction algorithms. An example of such an algorithm is DIRA, which is developed at Medical Radiation Physics and the Center for Medical Image Science and Visualization (CMIV) at Linköping University. DIRA uses dual-energy computed tomography (DECT) data to decompose patient tissues into two or three base components. So far, DIRA has used the MK2014 algorithm, which segments the human pelvis into bones, adipose tissue, the gluteus maximus muscles and the prostate. One problem was that MK2014 was limited to 2D and was not very robust. Aim: The aim of this thesis work was to extend MK2014 to 3D and to improve it. The task was structured into the following activities: selection of suitable segmentation algorithms, evaluation of their results, and combination of the best-performing ones into an automated segmentation algorithm. Of special interest was image registration using the Morphon. Methods: Several different algorithms were tested, for instance: Otsu's method followed by threshold segmentation; histogram matching followed by threshold segmentation, region growing and hole-filling; affine phase-based registration; and the Morphon. The best-performing algorithms were combined into the newly developed JJ2016. Results: For the segmentation of adipose tissue and bones in the eight investigated data sets, the JJ2016 algorithm gave better results than MK2014. 
The better results of JJ2016 were achieved by (i) a new segmentation algorithm for adipose tissue, which was not affected by the amount of air surrounding the patient and segmented smaller regions of adipose tissue, and (ii) a new filling algorithm for connecting segments of compact bone. The JJ2016 algorithm also estimates a likely position for the prostate and the rectum by combining linear and non-linear phase-based registration for atlas-based segmentation. The estimated position (center point) was in most cases close to the true position of the organs. Several deficiencies of the MK2014 algorithm were removed, but the improved version (MK2014v2) did not perform as well as JJ2016. Conclusions: JJ2016 performed well for all data sets. The algorithm is usable for the intended application but is, without further improvements, too slow for interactive use. Additionally, a validation of the algorithm for clinical use should be performed on a larger number of data sets, covering the variability of patients in shape and size.
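One of the building blocks listed in the Methods, Otsu's method followed by threshold segmentation, picks the grey-level threshold that maximizes the between-class variance of the histogram. A minimal illustrative sketch (pure Python, not the thesis's implementation; the toy "image" below is an invented bimodal list of grey values standing in for dark soft tissue and bright bone):

```python
def otsu_threshold(pixels, levels=256):
    """Return the threshold t maximizing between-class variance (Otsu's method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0   # running sum of grey values in the background class
    w_bg = 0       # running count of background pixels
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# toy bimodal "image": dark tissue (~40) and bright bone (~200)
image = [38, 40, 42, 41, 39, 198, 200, 202, 201, 199]
t = otsu_threshold(image)
mask = [p > t for p in image]   # threshold segmentation using the Otsu value
```

The returned threshold falls between the two modes, so thresholding with it separates the two toy tissue classes.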
122

Segmentação de cenas em telejornais: uma abordagem multimodal / Scene segmentation in news programs: a multimodal approach

Danilo Barbosa Coimbra 11 April 2011 (has links)
This work aims to develop a method for scene segmentation in digital video that deals with semantically complex segments. As a proof of concept, we present a multimodal approach that uses a more general definition of scenes in TV news, covering both scenes where anchors appear and scenes where no anchor appears. The results of the multimodal technique were significantly better than the results of the monomodal techniques applied separately. The tests were performed on four groups of Brazilian news programs obtained from two different television stations, each containing five editions, totaling twenty newscasts.
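For context, the simplest monomodal visual cue for segment boundaries — the kind of baseline a multimodal approach is compared against — is a histogram difference between consecutive frames. A hedged sketch (illustrative only, not code from the dissertation; frames are flat toy grey-level lists and the 0.5 cut threshold is an arbitrary choice):

```python
def hist16(frame):
    """Normalized 16-bin grey-level histogram of a frame (list of 0-255 values)."""
    h = [0.0] * 16
    for p in frame:
        h[p // 16] += 1
    n = len(frame)
    return [v / n for v in h]

def boundaries(frames, threshold=0.5):
    """Indices where consecutive frame histograms differ strongly (visual cue only)."""
    cuts = []
    for i in range(1, len(frames)):
        a, b = hist16(frames[i - 1]), hist16(frames[i])
        d = sum(abs(x - y) for x, y in zip(a, b)) / 2  # L1 distance scaled to [0, 1]
        if d > threshold:
            cuts.append(i)
    return cuts

# toy sequence: two dark anchor-desk frames, then two bright report frames
dark = [30] * 100
bright = [220] * 100
frames_seq = [dark, dark, bright, bright]
cuts = boundaries(frames_seq)
```

A cue like this detects abrupt visual changes but says nothing about semantics, which is why fusing it with other modalities (audio, text) pays off for news scenes.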
123

Robust Background Segmentation For Use in Real-time Application : A study on using available foreground-background segmentation research for real-world application / Robust bakgrundssegmentering för användning i realtids-applikation

Brynielsson, Emil January 2023 (has links)
In a world reliant on big industries to produce large quantities of more or less every product used, it is of utmost importance that the machines in those industries keep running with a minimum of downtime. One way more and more providers of industrial machines help their customers reduce downtime when a machine stops working or needs maintenance is remote guidance: a form of knowledge transfer in which a technician guides a regular employee in real time to solve the task, so that the technician does not need to travel to the factory. One technology suited to such a guidance system is augmented reality, for example letting the technician record his or her hand and overlaying it in real time on the video stream the on-site employee sees. Systems of this kind exist today; however, separating the technician's hand from the background can be a complex task, especially if the background is not a single colour or the hand has a colour similar to the background. Finding a solution to these limitations of background separation is the aim of this thesis. The thesis addresses the challenge by creating a test dataset containing five background scenarios deemed representative of what a user of the product could find without going out of their way. In each of the five scenarios, two videos were recorded: one with a white hand and one with a hand wearing a black glove. A machine learning model was then trained in a couple of different configurations and tested on the test scenarios. The best of the models was also run directly on a mobile phone. The machine learning model achieved rather promising background segmentation, and real-time performance was achievable when running on a computer with a dedicated GPU. On the mobile device, however, the processing time proved insufficient for real-time use.
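As an illustration of the limitation described above, the naive baseline — thresholding each pixel's colour distance from a known background colour — misses a hand whose colour resembles the background, which is exactly what motivates a learned segmentation model. A sketch (toy RGB pixels and an arbitrary tolerance, not the thesis's method):

```python
def segment_hand(pixels, bg_color, tol=60):
    """Naive foreground mask: pixels whose RGB distance from the assumed
    background colour exceeds tol. Fails when foreground resembles background."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [dist2(p, bg_color) > tol ** 2 for p in pixels]

green = (0, 200, 0)              # assumed uniform background
skin = (210, 160, 140)           # clearly different: detected
greenish_glove = (20, 190, 10)   # similar to background: missed
frame = [green, skin, greenish_glove]
mask = segment_hand(frame, green)
```

The greenish pixel is classified as background even though it belongs to the "hand", showing why colour thresholds alone cannot deliver robust separation.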
124

Fusion d'informations par la théorie de l'évidence pour la segmentation d'images / Information fusion using theory of evidence for image segmentation

Chahine, Chaza 31 October 2016 (has links)
Information fusion has been widely studied in the field of artificial intelligence. Information is generally considered imperfect; therefore, the combination of several (possibly heterogeneous) sources of information can lead to more comprehensive and complete information. In the field of fusion, a distinction is generally made between probabilistic approaches and non-probabilistic ones, which include the theory of evidence, developed in the 1970s. This method represents both the uncertainty and the imprecision of information by assigning masses not to a single hypothesis (the most common case for probabilistic methods) but to a set of hypotheses. The work presented in this thesis concerns information fusion for image segmentation.

To develop this method we start from the watershed algorithm, one of the most widely used methods for edge detection. Intuitively, the watershed considers the image as a topographic relief in which the height of each point is given by its grey level. Assuming that the local minima are pierced with holes and the landscape is immersed in a lake, the water rising from these minima generates the catchment basins, while the watershed lines are the dams built to prevent waters coming from different basins from mixing. The watershed is practically applied to the gradient magnitude, and a region is associated with each minimum. Fluctuations in the gradient image and the large number of local minima therefore generate a large set of small regions, yielding an over-segmented result that can hardly be useful. Meyer and Beucher proposed the seeded, or marker-controlled, watershed to overcome this over-segmentation problem. The essential idea is to specify a set of markers (or seeds) to be considered as the only minima to be flooded by water; the number of detected objects is then equal to the number of seeds, and the result depends on the markers. The automatic extraction of markers from images does not always lead to a satisfying result, especially in the case of complex images.

Several methods have been proposed for determining these markers automatically. We are particularly interested in the stochastic approach of Angulo and Jeulin, who estimate a probability density function (pdf) of contours after M simulations of conventional watershed segmentation, with N markers randomly selected for each simulation. A high pdf value is therefore assigned to strong contour points that are detected repeatedly across the simulations. But the decision that a point belongs to the "contour class" remains dependent on a threshold value, so a unique result cannot be obtained. To increase the robustness of this method and the uniqueness of its response, we propose to combine information using the theory of evidence. The watershed is generally calculated on the gradient image, a first-order derivative, which gives comprehensive information on the contours in the image, while the Hessian matrix, the matrix of second-order derivatives, gives more local information on the contours. Our goal is to combine these two complementary sources of information using the theory of evidence. The different versions of the fusion are tested on real images from the Berkeley database. The results are compared with the five manual segmentations provided as ground truth with this database. The quality of the segmentations obtained by our methods is assessed with different measures: uniformity, precision, recall, specificity, sensitivity and the Hausdorff metric distance.
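The evidence-theoretic combination described above can be illustrated with Dempster's rule, which fuses two mass functions defined on subsets of the frame {contour, background}: products of masses falling on disjoint sets become conflict and the rest is renormalized. A small sketch (the mass values below are invented for illustration and are not taken from the thesis):

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions (dicts frozenset -> mass) by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                                   # mass flows to the intersection
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:                                       # empty intersection = conflict
            conflict += ma * mb
    k = 1.0 - conflict                              # renormalization factor
    return {s: v / k for s, v in combined.items()}

C, B = frozenset({"contour"}), frozenset({"background"})
CB = C | B                       # ignorance: mass on the whole frame
m_gradient = {C: 0.6, CB: 0.4}                 # first-order (gradient) source
m_hessian = {C: 0.5, B: 0.2, CB: 0.3}          # second-order (Hessian) source
m = dempster(m_gradient, m_hessian)
```

Note how mass on the whole frame lets a source express partial ignorance instead of committing to a single hypothesis, which is the key difference from a purely probabilistic combination.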
125

Comparative Studies of Contouring Algorithms for Cardiac Image Segmentation

Ali, Syed Farooq January 2011 (has links)
No description available.
126

Etude de la méthode de Boltzmann sur réseau pour la segmentation d'anévrismes cérébraux / Study of the lattice Boltzmann method application to cerebral aneurysm segmentation

Wang, Yan 25 July 2014 (has links)
A cerebral aneurysm is a fragile area on the wall of a blood vessel in the brain, which can rupture and cause major bleeding and cerebrovascular accident. The segmentation of cerebral aneurysms is an essential step for diagnosis assistance, treatment and surgery planning. Unfortunately, manual segmentation still plays an important part in clinical angiography, and it has become a burden given the huge amount of data generated by medical imaging systems. Automatic image segmentation techniques provide an essential way to ease and speed up clinical examinations, reduce the amount of manual interaction and lower inter-operator variability. The main purpose of this PhD work is to develop automatic methods for cerebral aneurysm segmentation and measurement.

The present work consists of three main parts. The first part deals with the segmentation of giant aneurysms containing both lumen and thrombus. The methodology first extracts the lumen and thrombus using a two-step procedure based on the lattice Boltzmann method (LBM), and then refines the shape of the thrombus using a level set technique. In this part the proposed method is also compared with manual segmentation, demonstrating its good segmentation accuracy. The second part concerns an LBM approach to vessel segmentation in 2D+t images and to cerebral aneurysm segmentation in 3D medical images through an LBM D3Q27 model, which achieves good segmentation and high robustness to noise. The last part investigates a true 4D segmentation model that treats the 3D+t data as a 4D hypervolume and uses a D4Q81 lattice in the LBM, where time is handled in the same manner as the other three dimensions when defining the particle moving directions. Experiments are performed on synthetic 4D hypercube and 4D hypersphere images; the Dice values on the hypercube image, with and without noise, show that the proposed method is promising for 4D segmentation and denoising.
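The Dice value used to evaluate the experiments above is the standard overlap measure 2|A∩B| / (|A| + |B|) between a predicted and a reference binary mask. A minimal sketch (toy masks, illustrative only):

```python
def dice(a, b):
    """Dice similarity of two equal-length binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))  # voxels set in both masks
    size = sum(a) + sum(b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree perfectly

truth = [0, 1, 1, 1, 0, 0]  # reference segmentation
pred  = [0, 1, 1, 0, 1, 0]  # predicted segmentation
score = dice(truth, pred)
```

The score ranges from 0 (no overlap) to 1 (identical masks); here two of three foreground voxels match, giving 2·2 / (3+3) ≈ 0.667.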
127

Segmentação semiautomática de conjuntos completos de imagens do ventrículo esquerdo / Semiautomatic segmentation of left ventricle in full sets of cardiac images

Torres, Rafael Siqueira 05 April 2017 (has links)
A área médica tem se beneficiado das ferramentas construídas pela Computação e, ao mesmo tempo, tem impulsionado o desenvolvimento de novas técnicas em diversas especialidades da Computação. Dentre estas técnicas a segmentação tem como objetivo separar em uma imagem objetos de interesse, podendo chamar a atenção do profissional de saúde para áreas de relevância ao diagnóstico. Além disso, os resultados da segmentação podem ser utilizados para a reconstrução de modelos tridimensionais, que podem ter características extraídas que auxiliem o médico em tomadas de decisão. No entanto, a segmentação de imagens médicas ainda é um desafio, por ser extremamente dependente da aplicação e das estruturas de interesse presentes na imagem. Esta dissertação apresenta uma técnica de segmentação semiautomática do endocárdio do ventrículo esquerdo em conjuntos de imagens cardíacas de Ressonância Magnética Nuclear. A principal contribuição é a segmentação considerando todas as imagens provenientes de um exame, por meio da propagação dos resultados obtidos em imagens anteriormente processadas. Os resultados da segmentação são avaliados usando-se métricas objetivas como overlap, entre outras, comparando com imagens fornecidas por especialistas na área de Cardiologia / The medical field has been benefited from the tools built by Computing and has promote the development of new techniques in diverse Computer specialties. Among these techniques, the segmentation aims to divide an image into interest objects, leading the attention of the specialist to areas that are relevant in diagnosys. In addition, segmentation results can be used for the reconstruction of three-dimensional models, which may have extracted features that assist the physician in decision making. However, the segmentation of medical images is still a challenge because it is extremely dependent on the application and structures of interest present in the image. 
This dissertation presents a semiautomatic segmentation technique of the left ventricular endocardium in sets of cardiac images of Nuclear Magnetic Resonance. The main contribution is the segmentation considering all the images coming from an examination, through the propagation of the results obtained in previously processed images. Segmentation results are evaluated using objective metrics such as overlap, among others, compared to images provided by specialists in the Cardiology field
128

Segmentace trhu rodinných domů / Segmentation of the Single-Family Housing Market

Bondareva, Anna January 2010 (has links)
The main goal of this thesis is to test the possibility of forward and backward segmentation of the single-family housing market, to reveal, describe and profile the segments, and to suggest marketing recommendations. Data from primary research were encoded in MS Excel and processed with SPSS Statistics 19 for Windows. The output of the thesis reveals three forward segments and three backward segments of the market. Based on the specifics shown by each of the monitored segments, I suggest a number of marketing recommendations.
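Segment discovery of this kind is typically done with cluster analysis. As a toy analogue of what a statistics package computes, here is a plain one-dimensional k-means over a single attribute (the budget values below are invented for illustration and are not data from the thesis):

```python
def kmeans_1d(values, k, iters=20):
    """Plain k-means on one numeric attribute (e.g. housing budget), k clusters."""
    # spread initial centers across the sorted values
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:  # assign each value to its nearest center
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[i].append(v)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# invented budgets (in millions of CZK) forming three natural groups
budgets = [1.1, 1.2, 1.3, 3.0, 3.1, 3.2, 6.8, 7.0, 7.2]
centers, clusters = kmeans_1d(budgets, 3)
```

Real segmentation studies cluster on many attributes at once and then profile each cluster, but the assign-then-update loop is the same.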
130

Automated and interactive approaches for optimal surface finding based segmentation of medical image data

Sun, Shanhui 01 December 2012 (has links)
Optimal surface finding (OSF), a graph-based optimization approach to image segmentation, represents a powerful framework for medical image segmentation and analysis. In many applications, a pre-segmentation is required to enable OSF graph construction. Also, the cost function design is critical for the success of OSF. In this thesis, two issues in the context of OSF segmentation are addressed. First, a robust model-based segmentation method suitable for OSF initialization is introduced. Second, an OSF-based segmentation refinement approach is presented. For segmenting complex anatomical structures (e.g., lungs), a rough initial segmentation is required to apply an OSF-based approach. For this purpose, a novel robust active shape model (RASM) is presented. The RASM matching in combination with OSF is investigated in the context of segmenting lungs with large lung cancer masses in 3D CT scans. The robustness and effectiveness of this approach are demonstrated on 30 lung scans containing 20 normal lungs and 40 diseased lungs, where conventional segmentation methods frequently fail to deliver usable results. The developed RASM approach is generally applicable and suitable for large organs/structures. While providing high levels of performance in most cases, OSF-based approaches may fail in a local region in the presence of pathology or other local challenges. A new (generic) interactive refinement approach for correcting local segmentation errors based on the OSF segmentation framework is proposed. Following the automated segmentation, the user can inspect the result and correct local or regional segmentation inaccuracies by (iteratively) providing clues regarding the location of the correct surface. This expert information is utilized to modify the previously calculated cost function, locally re-optimizing the underlying modified graph without a need to start the new optimization from scratch. 
For refinement, a hybrid desktop/virtual reality user interface based on stereoscopic visualization technology and advanced interaction techniques is utilized for efficient interaction with the segmentations (surfaces). The proposed generic interactive refinement method is adapted to three applications. First, two refinement tools for 3D lung segmentation are proposed, and the performance is assessed on 30 test cases from 18 CT lung scans. Second, in a feasibility study, the approach is expanded to 4D OSF-based lung segmentation refinement and an assessment of performance is provided. Finally, a dual-surface OSF-based intravascular ultrasound (IVUS) image segmentation framework is introduced, application specific segmentation refinement methods are developed, and an evaluation on 41 test cases is presented. As demonstrated by experiments, OSF-based segmentation refinement is a promising approach to address challenges in medical image segmentation.
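In 2D, the core idea of optimal surface finding reduces to choosing one boundary row per image column so that the summed cost is minimal under a smoothness constraint, which a short dynamic program can illustrate. This is a toy analogue of the graph construction, not the thesis's 3D implementation; the cost matrix below is invented, with low costs marking the true boundary:

```python
def optimal_surface_2d(cost, smooth=1):
    """Minimal-cost 'surface' (one row per column) with |row[c+1]-row[c]| <= smooth,
    found by dynamic programming over a columns-by-rows cost matrix."""
    n_cols, n_rows = len(cost), len(cost[0])
    INF = float("inf")
    dp = [list(cost[0])] + [[INF] * n_rows for _ in range(n_cols - 1)]
    back = [[0] * n_rows for _ in range(n_cols)]   # backpointers for the path
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - smooth), min(n_rows - 1, r + smooth)
            prev = min(range(lo, hi + 1), key=lambda p: dp[c - 1][p])
            dp[c][r] = cost[c][r] + dp[c - 1][prev]
            back[c][r] = prev
    r = min(range(n_rows), key=lambda r: dp[-1][r])  # best endpoint in last column
    surface = [r]
    for c in range(n_cols - 1, 0, -1):               # walk the backpointers
        r = back[c][r]
        surface.append(r)
    return surface[::-1]

# low costs mark the boundary: row 1, dipping to row 2 in the middle column
cost = [[9, 1, 9],
        [9, 9, 1],
        [9, 1, 9]]
surface = optimal_surface_2d(cost)
```

The full OSF framework solves the same kind of constrained minimization on a 3D (or 4D) graph via a minimum s-t cut, and interactive refinement amounts to editing the cost values and re-optimizing locally.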
