521

Detekce a segmentace mozkového nádoru v multisekvenčním MRI / Brain Tumor Detection and Segmentation in Multisequence MRI

Dvořák, Pavel January 2015
This thesis deals with the detection and segmentation of brain tumors in multisequence MR images, with a focus on high- and low-grade gliomas. Three methods are proposed for this purpose. The first method detects the presence of brain tumor parts in axial and coronal slices. It is an algorithm based on multi-resolution symmetry analysis and was tested on T1, T2, T1C and FLAIR images. The second method extracts the whole tumor region, comprising the tumor core and the edema, from FLAIR and T2 images; it can extract the tumor from both 2D and 3D images. Symmetry analysis is again used, followed by automatic determination of an intensity threshold from the most asymmetric parts. The third method is based on local structure prediction and is able to segment the whole tumor region as well as the tumor core and its active part. It exploits the fact that most medical images exhibit high similarity between the intensities of neighboring pixels and strong correlation between intensities across imaging modalities. One way to capture and use this correlation is through local image patches. A similar correlation also exists between neighboring pixels of the image annotation, which is exploited by predicting the local structure of annotation patches. A convolutional neural network is used as the classifier, given its known ability to handle correlated features. All three methods were tested on a public database of 254 multisequence MR volumes, reaching an accuracy comparable with state-of-the-art methods in a much shorter computation time (on the order of seconds on a CPU), which leaves room for manual corrections in interactive segmentation.
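As a rough illustration of the multi-resolution symmetry analysis (a sketch with assumed inputs, not the thesis implementation), one can compare the intensity histograms of the left half and the mirrored right half of a slice at several downsampled scales; a large histogram distance flags a slice that may contain tumor tissue.

```python
import numpy as np

def asymmetry_score(slice_2d, levels=3, bins=64):
    """Rough left/right asymmetry of an MR slice, averaged over several
    resolutions (an illustrative sketch, not the thesis algorithm)."""
    img = slice_2d.astype(float)
    rng = (float(img.min()), float(img.max()) + 1e-6)
    score = 0.0
    for _ in range(levels):
        mid = img.shape[1] // 2
        left, right = img[:, :mid], img[:, -mid:][:, ::-1]   # mirror the right half
        h_l, _ = np.histogram(left, bins=bins, range=rng)
        h_r, _ = np.histogram(right, bins=bins, range=rng)
        h_l = h_l / max(h_l.sum(), 1)
        h_r = h_r / max(h_r.sum(), 1)
        score += 0.5 * np.abs(h_l - h_r).sum()                # total variation distance
        img = img[::2, ::2]                                   # move to a coarser resolution
    return score / levels

# slices whose score is well above the median over the volume are tumor candidates
```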
522

Rozpoznávání topologických informací z plánu křižovatky / Topology Recognition from Crossroad Plan

Huták, Petr January 2016
This master's thesis describes the research, design and development of a system for topology recognition from a crossroad plan. It explains the methods used for image processing, image segmentation and object recognition, describes approaches to processing maps represented as raster images, and introduces the target software into which the final product of the practical part of the project is integrated. The thesis focuses mainly on comparing different approaches to extracting features from raster maps and determining their semantic meaning. The practical part of the project is implemented in C# with the OpenCV library.
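As a rough illustration of extracting road-axis features from a raster crossroad plan (sketched in Python with OpenCV instead of the C# used in the thesis, and with an assumed input image), straight segments can be detected with the probabilistic Hough transform and junction candidates taken where the detected segments intersect:

```python
import cv2
import numpy as np

# hypothetical input: a raster plan with dark road markings on a light background
plan = cv2.imread("crossroad_plan.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(plan, 50, 150)

# detect straight road segments
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=60, maxLineGap=10)

def intersection(s1, s2):
    """Intersection of two segments treated as infinite lines (None if parallel)."""
    x1, y1, x2, y2 = s1
    x3, y3, x4, y4 = s2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

# junction candidates = pairwise intersections of detected segments
# (in practice one would keep only intersections lying close to both segments)
junctions = []
if segments is not None:
    segs = [s[0] for s in segments]
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            p = intersection(segs[i], segs[j])
            if p is not None:
                junctions.append(p)
```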
523

Interaktivní segmentace popředí/pozadí na mobilním telefonu / Interactive Foreground/Background Segmentation on Mobile Phone

Studený, Petr January 2015
This thesis deals with the problem of foreground extraction on mobile devices. The main goal of the project is to find or design segmentation methods for separating a user-selected object from an image (or video). The key requirements for these methods are processing time and segmentation quality. Some existing solutions to this problem are reviewed and their usability on mobile devices is discussed. A mobile application created within the project demonstrates the implemented real-time foreground extraction algorithm.
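One established algorithm for this kind of user-guided foreground extraction is GrabCut; the abstract does not state which method was implemented, so the OpenCV sketch below (with a hypothetical input image and user rectangle) only illustrates the general approach:

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")              # hypothetical input image
rect = (50, 50, 300, 400)                  # user-drawn box around the object (x, y, w, h)

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state required by OpenCV
fgd_model = np.zeros((1, 65), np.float64)

# a small iteration count keeps the runtime low, which matters on a mobile device
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 3, cv2.GC_INIT_WITH_RECT)

# pixels marked as (probable) foreground form the extracted object
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
result = img * fg[:, :, None]
```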
524

Detekce a identifikace typu obratle v CT datech onkologických pacientů / Vertebra detection and identification in CT oncological data

Věžníková, Romana January 2017
Automated spine and vertebra detection and segmentation in CT images is difficult for several reasons: vertebra boundaries are often unclear, the boundaries between adjacent vertebrae are indistinct, images contain artifacts, and the anatomy itself is highly complex. This thesis describes the design and implementation of vertebra detection and identification in CT images of cancer patients, which adds further complexity because some vertebrae are deformed. Otsu's method is used for vertebra segmentation, and vertebra detection is based on searching for the borders between individual vertebrae in sagittal planes. Decision trees or the generalized Hough transform are applied for identification, where the vertebra search is based on the similarity between a vertebra model shape and the planes of the CT scan.
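Otsu's method, used here for the segmentation step, picks the threshold that maximizes the between-class variance of the intensity histogram; a minimal sketch on a hypothetical CT slice could look like this:

```python
import numpy as np
from skimage.filters import threshold_otsu

# hypothetical sagittal CT slice in Hounsfield units
ct_slice = np.load("ct_sagittal_slice.npy")

# Otsu picks the threshold maximizing the between-class variance of the histogram
t = threshold_otsu(ct_slice)
bone_mask = ct_slice > t          # bright voxels: candidate vertebral bone

# the mask is then analyzed along sagittal planes to find borders between vertebrae
```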
525

A Real-Time and Automatic Ultrasound-Enhanced Multimodal Second Language Training System: A Deep Learning Approach

Mozaffari Maaref, Mohammad Hamed 08 May 2020
Pronunciation plays a critical role in communicative competence, especially for second language learners. Despite renewed awareness of the importance of articulation, handling the pronunciation needs of language learners remains a challenge for instructors, and pedagogical tools for pronunciation teaching and learning are relatively scarce beyond inefficient, traditional instructions such as listening and repeating. Recently, electronic visual feedback (EVF) systems (e.g., medical ultrasound imaging) have been exploited in new approaches that can be incorporated into a range of teaching and learning contexts. Evaluations of ultrasound-enhanced multimodal methods for pronunciation training have suggested that visualizing the articulatory system as biofeedback to language learners may improve the efficiency of articulation learning. Despite the recent successful use of multimodal techniques for pronunciation training, manual work and human intervention remain necessary at many stages of those systems. Furthermore, recognizing the tongue shape in noisy and low-contrast ultrasound images is a challenging task, especially for non-expert users in real-time applications. In addition, our user study revealed that users could not comfortably perceive the placement of their tongue inside the mouth just by watching pre-recorded videos. Machine learning is a subset of artificial intelligence (AI) in which machines learn from experience and acquire skills without human involvement. Inspired by the functionality of the human brain, deep artificial neural networks learn to perform a task from large amounts of data. Deep learning-based methods have emerged as the dominant paradigm in many computer vision tasks in recent years: unlike traditional image processing methods, they cope with challenges such as object occlusion, transformation variance, and background artifacts. In this dissertation, we implemented a guided language pronunciation training system that benefits from the strengths of deep learning techniques. Our modular system provides a fully automatic, real-time language pronunciation training tool using ultrasound-enhanced augmented reality. Qualitative and quantitative assessments indicate exceptional performance in terms of flexibility, generalization, robustness, and autonomy, outperforming previous techniques. Using our ultrasound-enhanced system, a language learner can observe her/his tongue movements during real-time speech, automatically superimposed on her/his face.
526

Cartographie, analyse et reconnaissance de réseaux vasculaires par Doppler ultrasensible 4D / Cartography, analysis and recognition of vascular networks by 4D ultrasensitive Doppler

Cohen, Emmanuel 19 December 2018
Ultrasensitive Doppler is a new ultrasound imaging technique allowing the observation of blood flow with very fine resolution and no contrast agent. Applied to cerebral microvascular imaging in rodents, this method produces very fine 3D vascular maps of the brain at high spatial resolution. These vascular networks contain characteristic tubular structures that could be used as landmarks to localize the position of the ultrasonic probe and take advantage of the practical properties of ultrasound devices, such as low cost and portability. We thus developed a first neuronavigation system in rodents based on automatic registration of brain images. Using minimal path extraction methods, we developed a new isotropic segmentation framework for the 3D geometric analysis of vascular networks (extraction of centrelines, diameters, curvatures and bifurcations). This framework was applied to quantify brain and tumor vascular networks, and finally led to the development of point cloud registration algorithms for the temporal monitoring of tumors.
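Minimal path extraction finds the lowest-cost route through an image when the cost is low inside vessels; a small 2D sketch with scikit-image (assuming a Doppler intensity map where vessels are bright and two seed points on the same vessel) could read:

```python
import numpy as np
from skimage.graph import route_through_array

# hypothetical 2D ultrasensitive Doppler map: bright pixels = strong blood flow
doppler = np.load("doppler_slice.npy").astype(float)

# cost is low inside vessels, high elsewhere
cost = 1.0 / (1e-3 + doppler / doppler.max())

start, end = (40, 12), (200, 310)   # hypothetical seed points on the same vessel
path, total_cost = route_through_array(cost, start, end,
                                       fully_connected=True, geometric=True)
centreline = np.array(path)         # ordered pixel coordinates of the vessel centreline
```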
527

Représentation d'images hiérarchique multi-critère / Hierarchical multi-feature image representation

Randrianasoa, Tianatahina Jimmy Francky 08 December 2017
Segmentation is a crucial task in image analysis. Novel acquisition devices produce images of higher resolution, containing more heterogeneous objects, and it is increasingly common to obtain several images of the same area from different sources. This situation arises in many domains (e.g. remote sensing, medical imaging) and makes classical image segmentation methods difficult to apply. Hierarchical segmentation approaches provide solutions to such issues. In particular, the Binary Partition Tree (BPT) is a hierarchical data structure modeling image content at different scales. It is built in a mono-feature way (i.e. one image, one metric) by progressively merging similar connected regions. However, the metric has to be chosen carefully by the user, and several images are generally handled by gathering the information provided by the various spectral bands into a single metric. Our first contribution is a generalized framework for building a BPT in a multi-feature way. It relies on a strategy that establishes a consensus between several metrics, allowing us to obtain a unified hierarchical segmentation space. Surprisingly, few works have been devoted to the evaluation of such hierarchical structures. Our second contribution is a framework for evaluating the quality of BPTs, relying on both intrinsic and extrinsic quality analysis based on ground-truth examples. We also discuss the use of this evaluation framework both for assessing the quality of a given BPT and for determining which BPT should be built for a given application. Experiments on satellite images emphasize the relevance of the proposed frameworks in the context of image segmentation.
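To make the construction principle concrete, here is a heavily simplified sketch (not the thesis framework) of bottom-up region merging driven by a consensus of several metrics, taken here simply as the mean of the per-metric dissimilarities between adjacent regions:

```python
import heapq
import numpy as np

def consensus_cost(feat_a, feat_b, metrics):
    """Toy consensus: the mean of several dissimilarity metrics (not the thesis strategy)."""
    return float(np.mean([m(feat_a, feat_b) for m in metrics]))

def build_bpt(regions, adjacency, metrics):
    """regions: {id: feature vector (np.ndarray)}; adjacency: iterable of frozenset({a, b}).
    Returns the merge sequence (child_a, child_b, parent_id), i.e. the BPT."""
    neighbours = {r: {x for pair in adjacency for x in pair if r in pair and x != r}
                  for r in regions}
    heap = [(consensus_cost(regions[a], regions[b], metrics), a, b)
            for a, b in (tuple(sorted(p)) for p in adjacency)]
    heapq.heapify(heap)
    alive, merges, next_id = set(regions), [], max(regions) + 1
    while len(alive) > 1 and heap:
        cost, a, b = heapq.heappop(heap)
        if a not in alive or b not in alive:
            continue                                         # stale pair: a child was already merged
        alive -= {a, b}
        regions[next_id] = (regions[a] + regions[b]) / 2     # toy feature fusion
        neighbours[next_id] = (neighbours[a] | neighbours[b]) & alive
        for n in neighbours[next_id]:
            neighbours[n].add(next_id)
            heapq.heappush(heap, (consensus_cost(regions[next_id], regions[n], metrics),
                                  next_id, n))
        alive.add(next_id)
        merges.append((a, b, next_id))
        next_id += 1
    return merges

# example: regions described by per-band mean-intensity vectors, with two toy metrics
# metrics = [lambda x, y: abs(x[0] - y[0]), lambda x, y: float(np.linalg.norm(x - y))]
```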
528

Segmentation of Cone Beam CT in Stereotactic Radiosurgery / Segmentering av Cone Beam CT I stereotaktisk radiokirurgi

Ashfaq, Awais January 2016
C-arm Cone Beam CT (CBCT) systems – thanks to their compact size, flexible geometry and low radiation exposure – inaugurated the era of on-board 3D image guidance in therapeutic and surgical procedures. Leksell Gamma Knife Icon by Elekta introduced an integrated CBCT system to determine the patient position prior to the surgical session, contributing to a paradigm shift toward frameless stereotactic radiosurgery. While CBCT offers quick imaging with high spatial accuracy, the quantitative values tend to be distorted by various physics-based artifacts such as scatter, beam hardening and the cone beam effect. Several 3D reconstruction algorithms targeting these artifacts require an accurate and fast segmentation of craniofacial CBCT images into air, tissue and bone. The objective of this thesis is to investigate the performance of deep learning-based convolutional neural networks (CNNs) relative to conventional image processing and machine learning algorithms for segmenting CBCT images. CBCT data for training and testing was provided by Elekta. A framework of segmentation algorithms including multilevel automatic thresholding, fuzzy clustering, a multilayer perceptron and a CNN is developed and tested against pre-defined evaluation metrics including pixel-wise prediction accuracy, statistical tests and execution time. The CNN outperformed the other segmentation algorithms on all evaluation metrics except execution time. The mean segmentation error for the CNN is 0.4% with a standard deviation of 0.07%, followed by fuzzy clustering with a mean segmentation error of 0.8% and a standard deviation of 0.12%. CNN-based segmentation takes about 500 s, compared to roughly 1 s for multilevel thresholding on a similarly sized CBCT image. The present work demonstrates the ability of CNNs to handle artifacts and noise in CBCT images while maintaining high semantic segmentation performance. However, further efforts targeting CNN execution speed are required before the segmentation framework can be used within real-time 3D reconstruction algorithms.
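For instance, the multilevel automatic thresholding baseline of the framework can be approximated in a few lines with scikit-image (an illustrative sketch with a hypothetical input volume, not Elekta's data or the thesis code), splitting a CBCT volume into air, soft tissue and bone:

```python
import numpy as np
from skimage.filters import threshold_multiotsu

cbct = np.load("cbct_volume.npy")             # hypothetical craniofacial CBCT volume

# two thresholds -> three classes: air, soft tissue, bone
thresholds = threshold_multiotsu(cbct, classes=3)
labels = np.digitize(cbct, bins=thresholds)   # 0 = air, 1 = tissue, 2 = bone
```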
529

Differences in tumor volume for treated glioblastoma patients examined with 18F-fluorothymidine PET and contrast-enhanced MRI / Differentiering av glioblastompatienter med avseende på tumörvolym från undersökningar med 18F-fluorothymidine PET och kontrastförstärkt MR

Hedman, Karolina January 2020
Background: Glioblastoma (GBM) is the most common malignant primary brain tumor. It is a rapidly progressing tumor that infiltrates the adjacent healthy brain tissue and is difficult to treat. Despite modern treatment including surgical resection followed by radiochemotherapy and adjuvant chemotherapy, the outcome remains poor; the median overall survival is 10-12 months. Neuroimaging is the most important diagnostic tool in the assessment of GBMs, and the current imaging standard is contrast-enhanced magnetic resonance imaging (MRI). Positron emission tomography (PET) has been recommended as a complementary imaging modality, as it adds information about the biological behavior and aggressiveness of the tumor. This study aims to investigate whether the combination of PET and MRI can improve the diagnostic assessment of these tumors. Patients and methods: 22 patients fulfilled the inclusion criteria, were diagnosed with GBM, and participated in all four 18F-fluorothymidine (FLT)-PET/MR examinations. FLT-PET/MR examinations were performed preoperatively (baseline), before the start of oncological therapy, and at two and six weeks into therapy. An adaptive thresholding algorithm was optimized, and a batch-processing pipeline and image feature extraction algorithms were developed and implemented in MATLAB and the analysis tool imlook4d. Results: There was a significant difference in radiochemotherapy treatment response between long-term and short-term survivors for tumor volume in MRI (p<0.05), and a marginally significant difference (p<0.10) for the maximum standardized uptake value (SUVmax), PET tumor volume, and total lesion activity (TLA). Preoperatively, short-term survivors had on average larger tumor volume, higher SUV, and higher TLA. The overall trend was that long-term survivors had a better treatment response in both MRI and PET than short-term survivors. During radiochemotherapy, long-term survivors showed shrinking MR tumor volume after two weeks and almost no remaining tumor volume after six weeks, whereas short-term survivors showed only marginal tumor volume reduction. In PET, the mean tumor volume of long-term survivors started to decrease two weeks into radiochemotherapy, while short-term survivors showed no PET volume reduction at two or six weeks. For patients with more or less than 200 days of progression-free survival, PET volume and TLA differed significantly, while MR volume was only marginally significant, suggesting that PET could add value. Conclusion: The combination of PET and MRI can be used to predict the radiochemotherapy response between two and six weeks, and to predict overall survival and progression-free survival using MR and PET volume, SUVmax, and TLA. This study is limited by its small sample size, and further research with a greater number of participants is recommended.
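The PET measures reported above (SUVmax, tumor volume and TLA) can be computed from a segmented uptake map in a few lines; the sketch below substitutes a fixed 40%-of-SUVmax threshold for the adaptive thresholding algorithm optimized in the thesis (whose details are not given here) and assumes a hypothetical SUV volume and voxel size:

```python
import numpy as np

suv = np.load("flt_pet_suv.npy")        # hypothetical FLT-PET volume converted to SUV
voxel_volume_ml = 0.2 * 0.2 * 0.2       # assumed 2 x 2 x 2 mm voxels, expressed in ml

suv_max = suv.max()
tumor_mask = suv >= 0.4 * suv_max       # simple stand-in for the adaptive threshold
# (in practice the threshold is applied inside a region delineated around the lesion)

tumor_volume_ml = tumor_mask.sum() * voxel_volume_ml
suv_mean = suv[tumor_mask].mean()
total_lesion_activity = suv_mean * tumor_volume_ml   # TLA = SUVmean * tumor volume
```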
530

Quantitative follow-up of pulmonary diseases using deep learning models / Suivi quantitatif de pathologies pulmonaires à base de modèles d'apprentissage profond

Tarando, Sebastian Roberto 16 May 2018
Infiltrative lung diseases (ILDs) comprise a large group of irreversible lung disorders which require regular follow-up with computed tomography (CT) imaging. A quantitative assessment is mandatory to establish the (regional) disease progression and/or the therapeutic impact. This implies the development of automated computer-aided diagnosis (CAD) tools for pathological lung tissue segmentation, a problem addressed here as pixel-based texture classification. Traditionally, such classification relies on a two-dimensional analysis of axial CT images by means of handcrafted features. Recently, deep learning techniques, especially convolutional neural networks (CNNs) for visual tasks, have shown great improvements over handcrafted, heuristics-based methods. However, the limitations of "classic" CNN architectures on texture datasets have been demonstrated: their intrinsic dimensionality is higher than that of handwritten digits or other object recognition datasets, which implies redesigning the network or enriching the system so that it learns meaningful textural features from the input data. This work addresses the automated quantitative assessment of different disorders based on lung texture classification. The proposed approach exploits a cascade of CNNs (specifically redesigned for texture categorization) for hierarchical classification, together with a specific preprocessing of the input data based on locally connected filtering (applied to the lung images to attenuate the vessel densities while preserving the high opacities related to pathologies). The classification, applied to the whole lung parenchyma, achieves an average accuracy of 84% (75.8% for normal tissue, 90% for emphysema and fibrosis, and 81.5% for ground glass).
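As an illustration of the kind of patch-based texture classifier used in such a cascade (a minimal Keras sketch with an assumed 32x32 patch size, not the network redesigned in the thesis), a first stage could separate normal from pathological patches and a second, identically structured network could then label the pathological subclasses:

```python
import tensorflow as tf
from tensorflow.keras import layers

def texture_cnn(n_classes):
    """Small patch-wise texture classifier (an illustrative architecture,
    not the redesigned network from the thesis)."""
    return tf.keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", padding="same",
                      input_shape=(32, 32, 1)),   # CT patch, vessels pre-attenuated
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(n_classes, activation="softmax"),
    ])

# cascade: a first network rejects normal tissue, a second labels the pathology
stage1 = texture_cnn(2)   # normal vs pathological
stage2 = texture_cnn(3)   # assumed subclasses: emphysema, fibrosis, ground glass
for model in (stage1, stage2):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```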
