About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
571

Volné algebraické struktury a jejich využití pro segmentaci digitálního obrazu / Free algebraic structures and their application for segmentation of a digital image

Čambalová, Kateřina January 2015

The thesis covers methods for image segmentation. Fuzzy segmentation is based on thresholding, generalized here to accept multiple criteria. The whole process is grounded mathematically in free algebra theory: a free distributive lattice is created from a poset of elements derived from image properties, and the lattice members are represented by the terms used for thresholding. The possible segmentation results form a distribution of equivalence classes. The thesis also describes the resulting algorithms and methods for their optimization, and introduces a method of area subtraction.
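A minimal sketch of the multi-criteria fuzzy thresholding idea described in the abstract: each criterion yields a fuzzy membership map, and maps are combined with pointwise minimum and maximum, which play the role of meet and join in the underlying distributive lattice. All function names, criteria and parameter values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def membership(image, threshold, softness=10.0):
    """Fuzzy (sigmoid) threshold: degree to which each pixel exceeds the threshold."""
    return 1.0 / (1.0 + np.exp(-(image - threshold) / softness))

def segment_multi_criteria(criteria_maps, cut=0.5):
    """Combine per-criterion membership maps with lattice operations.

    meet (min): pixel satisfies ALL criteria; join (max): pixel satisfies ANY.
    """
    meet = np.minimum.reduce(criteria_maps)
    join = np.maximum.reduce(criteria_maps)
    return meet >= cut, join >= cut

# toy usage: an intensity criterion and an edge-strength criterion
img = np.random.rand(64, 64) * 255
m1 = membership(img, 128)                        # brightness criterion
m2 = membership(np.abs(np.gradient(img)[0]), 5)  # edge-strength criterion
strict, loose = segment_multi_criteria([m1, m2])
```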
572

Segmentace obrazu pomocí neuronové sítě / Neural Network Based Image Segmentation

Jamborová, Soňa January 2011

This work proposes software for neural network based image segmentation. It defines the basic terms of the topic, focuses mainly on preparing image data for segmentation with a neural network, and describes and compares different approaches to image segmentation.
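Since the abstract centers on preparing image data for a segmentation network, here is a small, hedged sketch of one common preparation step: normalizing an image and cutting it into aligned image/mask patches. The patch size, stride and NHWC layout are assumptions for illustration, not details from the thesis.

```python
import numpy as np

def prepare_patches(image, mask, patch=32, stride=32):
    """Normalize a 2D grayscale image to [0, 1] and cut aligned image/mask
    patches for training a segmentation network. Purely illustrative."""
    img = (image - image.min()) / (np.ptp(image) + 1e-8)
    xs, ys = [], []
    h, w = img.shape
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            xs.append(img[r:r + patch, c:c + patch])
            ys.append(mask[r:r + patch, c:c + patch])
    # stack into NHWC arrays, one channel each
    return np.stack(xs)[..., None], np.stack(ys)[..., None]
```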
573

Nouveaux modèles de chemins minimaux pour l'extraction de structures tubulaires et la segmentation d'images / New Minimal Path Model for Tubular Extraction and Image Segmentation

Chen, Da 27 September 2016

In the fields of medical imaging and computer vision, segmentation plays a crucial role, with the goal of separating the components of interest from an image or a sequence of image frames. It bridges the gap between low-level image processing and high-level clinical and computer vision applications, such as diagnosis, therapy planning, and object detection and recognition. Among existing segmentation methods, minimal geodesics have important theoretical and practical advantages, such as the global minimum of the geodesic energy and the well-established fast marching method for its numerical solution. In this thesis, we focus on geodesic methods based on the Eikonal partial differential equation to develop accurate, fast and robust methods for tubular structure extraction and image segmentation, designing various local geodesic metrics for clinical applications and for image segmentation in general. The thesis applies different geodesic metrics within the Eikonal framework to solve image segmentation problems, especially tubular structure segmentation and region-based active contour models, making use of image features and prior clinical knowledge. The designed geodesic metrics exploit geodesic orientation or anisotropy, image feature consistency, geodesic curvature and geodesic asymmetry to deal with the difficulties suffered by classical minimal geodesic models and active contour models. The main contributions of this thesis lie in the in-depth study of these geodesic metrics and their applications in medical imaging and image segmentation. Experiments on medical and natural images show the effectiveness of the presented contributions.
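The minimal-path framework this thesis builds on computes geodesic distance maps by solving the Eikonal equation, typically with fast marching. As a self-contained stand-in, the sketch below uses Dijkstra's algorithm on a 4-connected pixel grid with an isotropic cost, which approximates the same distance map; a true fast marching solver would be more accurate, and the thesis's contribution lies in richer anisotropic, curvature-penalized metrics not shown here.

```python
import heapq
import numpy as np

def minimal_path_cost(cost, start):
    """Dijkstra distance map on a 4-connected grid: a discrete stand-in for
    solving the isotropic Eikonal equation |grad u| = cost (fast marching is
    the consistent continuous solver)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # edge weight: average of the two pixel costs
                nd = d + 0.5 * (cost[r, c] + cost[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist  # a geodesic is extracted by gradient descent on this map
```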
574

Improving Brain Tumor Segmentation using synthetic images from GANs

Nijhawan, Aashana January 2021

Artificial intelligence (AI) has seen a great deal of hype for several years, and even more so now in the field of diagnostic medical imaging. AI-based diagnosis has shown improvements in detecting the smallest abnormalities in tumors and lesions, which can tremendously help public healthcare. Hospitals hold large amounts of biomedical imaging data, but only a small portion is available for research due to data and privacy protection. Manually segmenting tumors in magnetic resonance imaging (MRI) scans is expensive and time-consuming, and the high-precision segmentation and classification required are usually performed by medical experts following clinical standards. Because so little data is available, machine learning models trained on it tend to overfit. Advances in deep learning make it possible to generate images using Generative Adversarial Networks (GANs), which have garnered a great deal of attention for their power to produce realistic-looking images, videos and audio. This thesis uses synthetic images generated by a progressively growing GAN (PGGAN) together with real images to perform segmentation of brain tumor MRI, and investigates whether the addition of this synthetic data improves the segmentation significantly. To analyze the quality of the images produced by the PGGAN, the Multi-Scale Structural Similarity Index Measure (MS-SSIM) and Sliced Wasserstein Distance (SWD) are recorded. To examine segmentation performance, Dice Similarity Coefficient (DSC) and accuracy scores are observed. To inspect whether the improvement from synthetic images is significant, a parametric paired t-test and a non-parametric permutation test are used. The addition of synthetic images to real images proved significant in most cases compared to using only real images. However, adding synthetic images makes the model uncertain, so the models' robustness is also tested using training-free uncertainty estimation of neural networks.
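Two of the evaluation tools named in the abstract are simple enough to sketch: the Dice similarity coefficient between binary masks and a paired (sign-flip) permutation test for judging whether the gain from adding synthetic images is significant. This is a generic sketch of the standard definitions, not the thesis code.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)

def paired_permutation_test(scores_a, scores_b, n_perm=10000, seed=0):
    """Two-sided paired permutation test on per-case score differences:
    randomly flip the sign of each difference and count how often the
    mean is at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a) - np.asarray(scores_b)
    observed = abs(diffs.mean())
    signs = rng.choice([-1, 1], size=(n_perm, diffs.size))
    perm_means = np.abs((signs * diffs).mean(axis=1))
    return (perm_means >= observed).mean()  # p-value

# toy usage: DSC per case with vs. without synthetic augmentation
p = paired_permutation_test([0.82, 0.79, 0.85, 0.88], [0.80, 0.78, 0.82, 0.84])
```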
575

Active geometric model : multi-compartment model-based segmentation & registration

Mukherjee, Prateep 26 August 2014

Indiana University-Purdue University Indianapolis (IUPUI) / We present a novel, variational and statistical approach for model-based segmentation. Our model generalizes the Chan-Vese model, proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, namely the Multi-Compartment Distance Functions or mcdf. Our proposed framework for segmentation is two-fold: first, several training samples distributed across various classes are registered onto a common frame of reference; then, we use a variational method similar to Active Shape Models (ASMs) to generate an average shape model and use the latter to partition new images. The key advantages of such a framework are: (i) landmark-free automated shape training; (ii) a strictly shape-constrained model for fitting test data. Our model can naturally deal with shapes of arbitrary dimension and topology (closed/open curves). We term our model the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: one for morphology estimation of 3D motor neuron compartments, the other for thickness estimation of Henle's fiber layer in the retina. We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.
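The model described above generalizes the Chan-Vese energy to multiple objects in one image domain. The classical two-phase Chan-Vese model it starts from is available in scikit-image; the snippet below runs it on a sample image as a point of reference. Parameter values are illustrative, and the keyword `max_num_iter` follows recent scikit-image versions (older releases used `max_iter`).

```python
from skimage import data, img_as_float
from skimage.segmentation import chan_vese

# Classical two-phase Chan-Vese on a grayscale image; the thesis
# generalizes this energy to several objects in the same domain.
image = img_as_float(data.camera())
mask = chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0,
                 tol=1e-3, max_num_iter=200, dt=0.5,
                 init_level_set="checkerboard")
print(mask.shape, mask.dtype)  # boolean segmentation of the image domain
```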
576

Morphometric analysis of hippocampal subfields : segmentation, quantification and surface modeling

Cong, Shan January 2014

Indiana University-Purdue University Indianapolis (IUPUI) / Object segmentation, quantification, and shape modeling are important areas in medical image processing. By combining these techniques, researchers can find valuable ways to extract and represent details of user-desired structures, which can serve as the basis for subsequent analyses such as feature classification, regression, and prediction. This thesis presents a new framework for building a three-dimensional (3D) hippocampal atlas model with subfield information mapped onto its surface, with which hippocampal surface registration can be performed and comparison and analysis can be facilitated and easily visualized. The framework combines three powerful tools for automatic subcortical segmentation and 3D surface modeling: FreeSurfer and FMRIB's Integrated Registration and Segmentation Tool (FIRST) are employed for hippocampal segmentation and quantification, while SPherical HARMonics (SPHARM) is employed for parametric surface modeling. This pipeline is shown to be effective in creating a hippocampal surface atlas using the Alzheimer's Disease Neuroimaging Initiative Grand Opportunity and phase 2 (ADNI GO/2) dataset. Intra-class Correlation Coefficients (ICCs) are calculated to evaluate the reliability of the extracted hippocampal subfields. The complex folding anatomy of the hippocampus offers many analytical challenges, especially since informative hippocampal subfields are usually ignored in detailed morphometric studies; current research results are therefore inadequate to accurately characterize hippocampal morphometry and effectively identify hippocampal structural changes related to different conditions. To address this challenge, one contribution of this study is to model the hippocampal surface using a parametric spherical harmonic model, which is a Fourier descriptor for a general 3D surface. The second contribution is to extend hippocampal studies by incorporating valuable hippocampal subfield information; based on the subfield distributions, a surface atlas is created for both left and right hippocampi. The third contribution is achieved by calculating Fourier coefficients in the parametric space: based on the coefficient values and user-desired degrees, a pair of averaged hippocampal surface atlas models can be reconstructed. These contributions lay a solid foundation for a more accurate, subfield-guided morphometric analysis of the hippocampus and have the potential to reveal subtle hippocampal structural damage associated with different conditions.
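SPHARM describes a surface by coefficients of spherical harmonics. A full SPHARM shape model expands each coordinate function x, y, z in this basis; the sketch below shows the simpler case of evaluating a radial surface r(theta, phi) from given coefficients with scipy. The coefficient values are toy assumptions, standing in for coefficients fitted to hippocampal surface points by least squares.

```python
import numpy as np
from scipy.special import sph_harm

def reconstruct_surface(coeffs, lmax, theta, phi):
    """Evaluate a spherical-harmonic expansion r = sum_{l,m} c_{lm} Y_l^m.

    coeffs: dict mapping (l, m) -> coefficient (here user-supplied toys).
    theta: azimuth in [0, 2*pi); phi: polar angle in [0, pi].
    """
    r = np.zeros_like(theta, dtype=complex)
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            c = coeffs.get((l, m), 0.0)
            if c:
                r += c * sph_harm(m, l, theta, phi)
    return r.real

# toy: a unit sphere perturbed by a degree-2 harmonic
theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, 60),
                         np.linspace(0, np.pi, 30))
coeffs = {(0, 0): np.sqrt(4 * np.pi), (2, 0): 0.3}  # Y_0^0 = 1/sqrt(4*pi)
surface = reconstruct_surface(coeffs, lmax=2, theta=theta, phi=phi)
```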
577

Superpixels and their Application for Visual Place Recognition in Changing Environments

Neubert, Peer 01 December 2015

Superpixels are the result of an image oversegmentation. They are an established intermediate-level image representation used for various applications including object detection, 3D reconstruction and semantic segmentation. While there are various approaches to create such segmentations, there is a lack of knowledge about their properties; in particular, there are contradicting results published in the literature. This thesis identifies segmentation quality, stability, compactness and runtime as important properties of superpixel segmentation algorithms. While established evaluation methodologies exist for some of these properties, this is not the case for segmentation stability and compactness. Therefore, this thesis presents two novel metrics for their evaluation based on ground-truth optical flow. These two metrics are used together with other novel and existing measures to create a standardized benchmark for superpixel algorithms, which is used for an extensive comparison of available algorithms. The evaluation results motivate two novel segmentation algorithms that better balance the trade-offs of existing algorithms: the proposed Preemptive SLIC algorithm incorporates a local preemption criterion into the established SLIC algorithm and saves about 80 % of the runtime; the proposed Compact Watershed algorithm combines seeded watershed segmentation with compactness constraints to create regularly shaped, compact superpixels at the even higher speed of the plain watershed transformation.

Operating autonomous systems over the course of days, weeks or months based on visual navigation requires repeated recognition of places despite severe appearance changes, as induced, for example, by illumination changes, day-night cycles, changing weather or seasons - a severe problem for existing methods. Therefore, the second part of this thesis presents two novel approaches that incorporate superpixel segmentations into place recognition in changing environments. The first is the learning of systematic appearance changes: instead of matching images between, for example, summer and winter directly, an additional prediction step is proposed. Based on superpixel vocabularies, a predicted image is generated that shows how the summer scene might look in winter, or vice versa. The presented results show that, if certain assumptions on the appearance changes and the available training data are met, existing holistic place recognition approaches can benefit from this additional prediction step. Since holistic approaches to place recognition are known to fail in the presence of viewpoint changes, this thesis also presents a new place recognition system based on local landmarks and Star-Hough. Star-Hough is a novel approach that incorporates the spatial arrangement of local image features into the computation of image similarities; it is based on star graph models and Hough voting and is particularly suited for local features with low spatial precision and high outlier rates, as expected in the presence of appearance changes. The novel landmarks combine local region detectors with descriptors based on convolutional neural networks. This thesis presents and evaluates several new approaches to incorporating superpixel segmentations into local region detection. While the proposed system can be used with different types of local regions, the combination with regions obtained from the novel multiscale superpixel grid in particular is shown to perform better than state-of-the-art methods - a promising basis for practical applications.
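Both algorithm families compared in the thesis have counterparts in scikit-image: SLIC (the basis of the proposed Preemptive SLIC) and seeded watershed with a compactness term (the idea behind Compact Watershed). The snippet below contrasts them on a sample image; the seed spacing and compactness values are illustrative choices, not the thesis's settings.

```python
import numpy as np
from skimage import data, filters, segmentation, color, util

img = util.img_as_float(data.astronaut())

# SLIC: k-means-like clustering in color+space; Preemptive SLIC adds a
# local stopping criterion to cut roughly 80 % of this runtime.
slic_labels = segmentation.slic(img, n_segments=400, compactness=10)

# Compact watershed: seeded watershed on the gradient image with a
# compactness term, yielding regular, grid-like superpixels.
gray = color.rgb2gray(img)
gradient = filters.sobel(gray)
seeds = np.zeros_like(gray, dtype=int)
rows, cols = np.mgrid[12:gray.shape[0]:25, 12:gray.shape[1]:25]
seeds[rows, cols] = np.arange(1, rows.size + 1).reshape(rows.shape)
cw_labels = segmentation.watershed(gradient, markers=seeds, compactness=0.01)
```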
578

EXPANDING THE AUTONOMOUS SURFACE VEHICLE NAVIGATION PARADIGM THROUGH INLAND WATERWAY ROBOTIC DEPLOYMENT

Reeve David Lambert (13113279) 19 July 2022

This thesis presents solutions to some of the problems facing Autonomous Surface Vehicle (ASV) deployments in inland waterways through the development of navigational and control systems. Fluvial systems are among the hardest inland waterways to navigate and are therefore used as the use case for system development. The systems are built to reduce the reliance on a priori information during ASV operation. This is crucial for exceptionally dynamic environments such as fluvial bodies of water, which have poorly defined routes and edges, can change course in short time spans, carry away and deposit obstacles, and expose or cover shoals and man-made structures as their water level changes. While navigation of fluvial systems is exceptionally difficult, autonomous data collection there can aid important scientific missions in understudied environments.

The work makes four contributions targeting four fundamental problems in fluvial navigation and control. To sense the course of fluvial systems for navigable-path determination, a fluvial segmentation study is conducted and a novel dataset detailed. To enable rapid path computations and augmentations in a fast-moving environment, a Dubins path generation and augmentation algorithm is presented and used in conjunction with an Integral Line-Of-Sight (ILOS) path-following method. To rapidly avoid unseen or undetected obstacles, a Deep Reinforcement Learning (DRL) agent is built and tested across domains to create dynamic local paths that the vehicle can quickly adopt for collision avoidance. Finally, a custom low-cost and deployable ASV, BREAM (Boat for Robotic Engineering and Applied Machine-Learning), capable of operating in fluvial environments is presented, along with an autonomy package that provides base-level sensing and autonomy processing capability to varying platforms.

Each of these contributions forms part of a larger documented Fluvial Navigation Control Architecture (FNCA), proposed as a way to aid a-priori-free navigation of fluvial waterways. The architecture organizes the navigational structures into high-, mid-, and low-level Guidance and Navigation Control (GNC) layers designed to ease cross-vehicle and cross-domain deployments. Each component of the architecture is documented and tested, and its role in the control architecture as a whole is reported.
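Among the components listed, the Integral Line-Of-Sight (ILOS) path-following law is compact enough to sketch: the vehicle steers toward a point a lookahead distance ahead on the path, and an integral term builds up to cancel steady cross-track drift such as river current. This follows the commonly cited Borhaug/Fossen formulation with illustrative gains; it is not the thesis's exact implementation.

```python
import numpy as np

def ilos_step(pos, wp_a, wp_b, e_int, delta=5.0, sigma=0.05, dt=0.1):
    """One step of ILOS guidance toward the path segment wp_a -> wp_b.

    pos, wp_a, wp_b: (x, y) tuples; e_int: integral state.
    Returns the desired heading and the updated integral state.
    """
    # path tangent angle
    gamma = np.arctan2(wp_b[1] - wp_a[1], wp_b[0] - wp_a[0])
    # cross-track error: lateral offset of the vehicle from the path
    dx, dy = pos[0] - wp_a[0], pos[1] - wp_a[1]
    e = -dx * np.sin(gamma) + dy * np.cos(gamma)
    # integral action compensates steady drift (e.g. river current)
    e_int_dot = delta * e / ((e + sigma * e_int) ** 2 + delta ** 2)
    e_int += e_int_dot * dt
    # steer toward a lookahead point: heading = tangent minus LOS correction
    psi_d = gamma - np.arctan((e + sigma * e_int) / delta)
    return psi_d, e_int
```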
579

ENERGY EFFICIENT EDGE INFERENCE SYSTEMS

Soumendu Kumar Ghosh (14060094) 07 August 2023

Deep Learning (DL)-based edge intelligence has garnered significant attention in recent years due to the rapid proliferation of the Internet of Things (IoT), embedded, and intelligent systems, collectively termed edge devices. Sensor data streams acquired by these edge devices are processed by a Deep Neural Network (DNN) application that runs on the device itself or in the cloud. However, the high computational complexity and energy consumption of processing DNNs often limit their deployment on these edge inference systems due to limited compute, memory and energy resources. Furthermore, high costs, strict application latency demands, data privacy, security constraints, and the absence of reliable edge-cloud network connectivity heavily impact edge application efficiency in the case of cloud-assisted DNN inference. Inevitably, performance and energy efficiency are of utmost importance in these edge inference systems, aside from the accuracy of the application. To facilitate energy-efficient edge inference systems running computationally complex DNNs, this dissertation makes three key contributions.

The first contribution adopts a full-system approach to Approximate Computing, a design paradigm that trades off a small degradation in application quality for significant energy savings. Within this context, we present the foundational concepts of AxIS, the first approximate edge inference system that jointly optimizes the constituent subsystems, leading to substantial energy benefits compared to optimization of the individual subsystems. To illustrate the efficacy of this approach, we demonstrate multiple versions of an approximate smart camera system that executes various DNN-based unimodal computer vision applications, showcasing how the sensor, memory, compute, and communication subsystems can all be synergistically approximated for energy-efficient edge inference.

Building on this foundation, the second contribution extends AxIS to multimodal AI, harnessing data from multiple sensor modalities to impart human-like cognitive and perceptual abilities to edge devices. By exploring optimization techniques for multiple sensor modalities and subsystems, this research reveals the impact of synergistic modality-aware optimizations on system-level accuracy-efficiency (AE) trade-offs, culminating in the introduction of SysteMMX, the first AE-scalable cognitive system that allows efficient multimodal inference at the edge. To illustrate the practicality and effectiveness of this approach, we present an in-depth case study centered around a multimodal system that leverages RGB and Depth sensor modalities for image segmentation tasks.

The final contribution focuses on optimizing the performance of an edge-cloud collaborative inference system through intelligent DNN partitioning and computation offloading. We delve into the realm of distributed inference across edge devices and cloud servers, unveiling the challenges associated with finding the optimal partitioning point in DNNs for significant inference latency speedup. To address these challenges, we introduce PArtNNer, a platform-agnostic and adaptive DNN partitioning framework capable of dynamically adapting to changes in communication bandwidth and cloud server load. Unlike existing approaches, PArtNNer does not require pre-characterization of underlying edge computing platforms, making it a versatile and efficient solution for real-world edge-cloud scenarios.

Overall, this thesis provides novel insights, innovative techniques, and intelligent solutions to enable energy-efficient AI at the edge. The contributions presented herein serve as a solid foundation for future researchers to build upon, driving innovation and shaping the trajectory of research in edge AI.
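The core trade-off behind DNN partitioning can be sketched as a search over cut points: run the first k layers on the device, send the k-th activation over the network, and finish in the cloud. Note that PArtNNer itself avoids the per-platform profiling this brute-force sketch assumes; the function and numbers below are purely illustrative.

```python
def best_partition(edge_ms, cloud_ms, out_bytes, bandwidth_bps):
    """Pick the layer after which to offload a DNN to the cloud.

    edge_ms[i]: latency of layer i on the device; cloud_ms[i]: on the server;
    out_bytes[i]: size of layer i's output activation. Cutting after layer k
    costs edge time for layers 0..k, transfer of activation k, and cloud time
    for the remaining layers (k = last layer means fully on-device).
    """
    n = len(edge_ms)
    best_k, best_t = None, float("inf")
    for k in range(n):
        t = (sum(edge_ms[: k + 1])                      # edge compute
             + out_bytes[k] * 8 / bandwidth_bps * 1000  # uplink transfer (ms)
             + sum(cloud_ms[k + 1:]))                   # cloud compute
        if t < best_t:
            best_k, best_t = k, t
    return best_k, best_t

# toy: a 4-layer net over a 10 Mbit/s link -> cut point and latency in ms
print(best_partition([5, 8, 12, 6], [1, 2, 3, 1],
                     [400_000, 80_000, 20_000, 4_000], 10_000_000))
```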
580

Applications of Deep Learning on Cardiac MRI: Design Approaches for a Computer Aided Diagnosis

Pérez Pelegrí, Manuel 27 April 2023

Cardiovascular diseases are among the most predominant causes of death and comorbidity in developed countries, so heavy investments have been made in recent decades to produce high-quality diagnosis tools and treatment applications for cardiac diseases. One of the best proven tools to characterize the heart is magnetic resonance imaging (MRI), thanks to its high-resolution capabilities in both the spatial and temporal dimensions, which allow dynamic imaging of the heart for accurate diagnosis. The dimensions of the left ventricle and the ejection fraction derived from them are the most powerful predictors of cardiac morbidity and mortality, and their quantification has important implications for the management and treatment of patients. Cardiac MRI is thus the most accurate imaging technique for left-ventricular assessment. To obtain an accurate and fast diagnosis, reliable image-based biomarker computation through image processing software is needed. Nowadays most of the employed tools rely on semi-automatic Computer-Aided Diagnosis (CAD) systems that require the clinical expert to interact with them, consuming valuable time from professionals whose aim should only be to interpret the results. A paradigm shift is beginning to reach the medical sector in which fully automatic CAD systems require no user interaction: these systems are designed to compute the biomarkers required for a correct diagnosis without affecting the physician's natural workflow, and they can start their computations the moment an image is saved in the hospital archive system. Automatic CAD systems, although highly regarded as one of the next big advances in the radiology world, are extremely difficult to develop and rely on Artificial Intelligence (AI) technologies to reach medical standards. In this context, deep learning (DL) has emerged in the past decade as the most successful technology to address this problem. More specifically, convolutional neural networks (CNNs) have been among the most successful and studied techniques for image analysis, including medical imaging. In this work we describe the main applications of CNNs in fully automatic CAD systems that assist the clinical diagnostic routine by means of cardiac MRI. The work covers the main points to take into account when developing such systems and presents impactful results on the use of CNNs for cardiac MRI, separated into three projects covering the segmentation, automatic biomarker estimation with explainability, and event detection problems. The work describes novel, high-impact approaches to applying CNNs to cardiac MRI analysis and provides several key findings, enabling the integration of this recent and rapidly growing technology, in several ways, into fully automatic CAD systems that can produce highly accurate, fast and reliable results. The results described will greatly improve and positively impact the workflow of clinical experts in the near future. / Pérez Pelegrí, M. (2023). Applications of Deep Learning on Cardiac MRI: Design Approaches for a Computer Aided Diagnosis [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/192988
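The abstract identifies left-ventricular dimensions and the derived ejection fraction as the key biomarkers such CAD systems compute. Once a CNN has segmented the LV cavity at end-diastole and end-systole, the ejection fraction follows from simple voxel counting, as in this illustrative sketch (the voxel counts and voxel volume below are toy values).

```python
def ejection_fraction(ed_mask_voxels, es_mask_voxels, voxel_ml):
    """Left-ventricular ejection fraction from segmentation masks.

    ed_mask_voxels / es_mask_voxels: number of voxels labeled as LV cavity
    at end-diastole and end-systole; voxel_ml: volume of one voxel in ml.
    """
    edv = ed_mask_voxels * voxel_ml   # end-diastolic volume (ml)
    esv = es_mask_voxels * voxel_ml   # end-systolic volume (ml)
    return 100.0 * (edv - esv) / edv  # EF in percent

# toy numbers: 120 ml EDV, 50 ml ESV -> EF ~ 58.3 %
print(round(ejection_fraction(60000, 25000, 0.002), 1))
```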
