21

ARTIFICIAL INTELLIGENCE APPLICATIONS FOR IDENTIFYING KEY FEATURES TO REDUCE BUILDING ENERGY CONSUMPTION

Lakmini Rangana Senarathne (16642119) 07 August 2023 (has links)
The International Energy Agency (IEA) estimates that residential and commercial buildings consume 40% of global energy and emit 24% of CO2. A building's design parameters and location significantly affect its energy usage, so adjusting building parameters and features in an optimal way helps reduce energy usage and produce energy-efficient buildings. Analyzing the impact of these influencing factors is therefore critical to reducing building energy consumption.

Toward this goal, artificial intelligence techniques, namely Explainable Artificial Intelligence (XAI) and machine learning (ML), were applied to identify the key building features for reducing building energy, by analyzing how various building features affect energy consumption. The relative importance of input features affecting commercial building energy usage was investigated, along with a parametric analysis of the impact of input variables on residential building energy usage. The dependencies and relationships between the design variables of residential buildings were also examined. Finally, the study analyzed the impact of location features on cooling energy usage in commercial buildings.

Three datasets were used for the energy consumption analysis: the Commercial Building Energy Consumption Survey (CBECS) datasets gathered in 2012 and 2018, the University of California Irvine (UCI) energy efficiency dataset, and Commercial Load Data (CLD). Python and WEKA were used for the analysis. Random Forest, Linear Regression, Bayesian Networks, and Logistic Regression models predicted energy consumption from these datasets. Statistical tests such as the Wilcoxon rank-sum test checked for significant differences between specific datasets, and Shapash, a Python library, produced the feature importance graphs.

The results indicated that cooling degree days are the most important feature in predicting cooling load, with contribution values of 34.29% (2018) and 19.68% (2012). The parametric analysis indicated that a 50% reduction in overall height reduces the heating load by 64.56% and the cooling load by 57.47%. The Wilcoxon rank-sum test further indicated that building location impacts energy consumption at the 0.05 significance level. The proposed analysis is beneficial for real-world applications and energy-efficient building construction.
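As an illustration of the kind of analysis described above (tree-ensemble feature importance plus a Wilcoxon rank-sum test), here is a minimal Python sketch. The CSV file and column names are hypothetical placeholders, and scikit-learn's built-in importances stand in for Shapash's contribution plots; this is not the thesis's actual code.

import pandas as pd
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical building-energy dataset; file and column names are placeholders.
df = pd.read_csv("cbecs_sample.csv")
X = df[["cooling_degree_days", "floor_area", "overall_height", "glazing_area"]]
y = df["cooling_load"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank features by impurity-based importance (a stand-in for Shapash contribution graphs).
for name, imp in sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.2%}")

# Wilcoxon rank-sum test: does cooling load differ significantly between two locations?
loc_a = df.loc[df["climate_zone"] == "hot", "cooling_load"]
loc_b = df.loc[df["climate_zone"] == "cold", "cooling_load"]
stat, p = ranksums(loc_a, loc_b)
print(f"rank-sum statistic={stat:.3f}, p={p:.4f}  (significant at the 0.05 level if p < 0.05)")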
22

Comparison of Logistic Regression and an Explained Random Forest in the Domain of Creditworthiness Assessment

Ankaräng, Marcus, Kristiansson, Jakob January 2021 (has links)
As the use of AI in society grows, the demand for explainable algorithms has increased. A challenge with many modern machine learning algorithms is that, due to their often complex structures, they cannot produce human-interpretable explanations. Research within explainable AI has resulted in methods that can be applied on top of non-interpretable models to justify their decisions. The aim of this thesis is to compare an unexplained machine learning model used in combination with an explanatory method against a model that is explainable through its inherent structure. Random forest was the unexplained model in question and the explanatory method was SHAP. The explainable model was logistic regression, which is explainable through its feature weights. The comparison was conducted within the area of creditworthiness and was based on predictive performance and explainability. Furthermore, the thesis uses these models to investigate what characterizes loan applicants who are likely to default. The comparison showed that neither model performed significantly better than the other in terms of predictive performance. Characteristics of bad loan applicants differed between the two algorithms. Three important aspects were the applicant's age, where they lived, and whether they had a residential phone. Regarding explainability, several advantages of SHAP were observed: SHAP can produce explanations on both a local and a global level, and it offers a way to take advantage of the high performance of many modern machine learning algorithms while fulfilling today's increased requirement of transparency.
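A minimal sketch of the comparison described above: an inherently explainable logistic regression versus a random forest explained post hoc with the shap library. The credit dataset and feature names are hypothetical placeholders, not the thesis's data.

import shap
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical credit dataset; the file and feature names are placeholders.
df = pd.read_csv("credit_applications.csv")
X = df[["age", "residential_phone", "income", "region_code"]]
y = df["default"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently explainable model: the coefficients double as global explanations.
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(dict(zip(X.columns, logit.coef_[0])))

# Black-box model plus post-hoc explanation via SHAP.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
explainer = shap.TreeExplainer(forest)
shap_values = explainer.shap_values(X_test)

# shap_values gives per-applicant (local) contributions; averaging their
# absolute values over the test set yields a global importance ranking.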
23

Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection

Hammarström, Tobias January 2020 (has links)
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas such as using deep convolutional neural networks within healthcare to diagnose cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Vectors (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained artificially introduced noise. The task was to assess the methods' ability to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of applying the methods to models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also uses the CAV sensitivities to introduce and perform a sensitivity-magnitude analysis. Both methods proved useful in distinguishing between the model's two decision-making strategies, based on either the animal or the noise, although greater insight into the intricacies of those strategies is still desired. Additionally, the methods provided a deeper understanding of the model's learning, as the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise for detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods to more complex areas with specialized models and tasks, e.g. oral cancer.
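For reference, a schematic of the standard TCAV computation the thesis builds on (a CAV is the normal of a linear boundary between concept and random activations; the TCAV score is the fraction of inputs whose class gradient points along it). This is a sketch of the published method, not the thesis's code; the activation and gradient arrays are assumed to be precomputed.

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_cav(concept_acts, random_acts):
    """Fit a linear boundary between concept and random activations;
    the Concept Activation Vector (CAV) is the boundary's normal vector."""
    X = np.vstack([concept_acts, random_acts])
    y = np.r_[np.ones(len(concept_acts)), np.zeros(len(random_acts))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(grads, cav):
    """Fraction of inputs whose class-score gradient (w.r.t. the chosen
    layer's activations) points in the CAV direction."""
    sensitivities = grads @ cav          # directional derivatives
    return float((sensitivities > 0).mean()), sensitivities

# concept_acts, random_acts: (n, d) activations at a chosen layer;
# grads: (m, d) gradients of the target class score at that layer.
# The sensitivity magnitudes (second return value) are the quantity the
# thesis's sensitivity-magnitude analysis examines beyond the TCAV score.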
24

Interpretation of Dimensionality Reduction with Supervised Proxies of User-defined Labels

Leoni, Cristian January 2021 (has links)
Research on machine learning (ML) explainability has received much attention in recent years. The interest, however, has mostly focused on supervised models, while other ML fields have not received the same level of attention. Despite its usefulness in a variety of fields, explainability for unsupervised learning is still an open issue. In this paper, we present a Visual Analytics framework based on eXplainable AI (XAI) methods to support the interpretation of dimensionality reduction (DR) methods. The framework provides the user with an interactive and iterative process to investigate and explain user-perceived patterns for a variety of DR methods, by using XAI methods to explain a supervised model trained on the user-selected data. To evaluate the effectiveness of the proposed solution, we focus on two main aspects: the quality of the visualization and the quality of the explanation. This challenge is tackled using both quantitative and qualitative methods, and, due to the lack of pre-existing test data, a new benchmark has been created. The quality of the visualization is established using a well-known survey-based methodology, while the quality of the explanation is evaluated using both case studies and a controlled experiment, where the accuracy of the generated explanations is evaluated on the proposed benchmark. The results show a strong capacity of our framework to generate accurate explanations, with an accuracy of 89% in the controlled experiment. The explanations generated for the two case studies yielded very similar results when compared with pre-existing, well-known ground truths from the literature. Finally, the user experiment produced high overall quality scores for all assessed aspects of the visualization.
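The core idea (explain a user-defined pattern in a DR embedding via a supervised proxy) can be sketched as follows. PCA, a random forest proxy, and permutation importance stand in for whichever DR and XAI methods the framework actually supports, and the "user selection" is simulated with a half-plane.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, _ = load_iris(return_X_y=True)
emb = PCA(n_components=2).fit_transform(X)   # any DR method could stand here

# User-defined labels: points inside a region of the embedding the user
# perceives as a pattern; a simple half-plane stands in for a lasso selection.
labels = (emb[:, 0] > 0).astype(int)

# Supervised proxy: learn to reproduce the user's selection from the
# ORIGINAL features, then ask which features drive that pattern.
proxy = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
imp = permutation_importance(proxy, X, labels, n_repeats=20, random_state=0)
for i in np.argsort(-imp.importances_mean):
    print(f"feature {i}: importance {imp.importances_mean[i]:.3f}")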
25

Exploring attribution methods explaining atrial fibrillation predictions from sinus ECGs : Attributions in Scale, Time and Frequency

Sörberg, Svante January 2021 (has links)
Deep learning models are ubiquitous in machine learning. They offer state-of-the-art performance on tasks ranging from natural language processing to image classification. The drawback of these complex models is their black-box nature: it is difficult for the end user to understand how a model arrives at its prediction from the input. This is especially pertinent in domains such as medicine, where being able to trust a model is paramount. In this thesis, ways of explaining a model that predicts paroxysmal atrial fibrillation from sinus electrocardiogram (ECG) data are explored. Building on the concept of feature attributions, the problem is approached from three distinct perspectives: time, scale, and frequency. Specifically, one method based on the Integrated Gradients framework and one method based on Shapley values are used. By perturbing the data, retraining the model, and evaluating the retrained model on the perturbed data, the degree of correspondence between the attributions and the meaningful information in the data is evaluated. Results indicate that the attributions in scale and frequency are somewhat consistent with the meaningful information in the data, while the attributions in time are not. The conclusion drawn from the results is that the task of predicting atrial fibrillation becomes easier for the model in question as the level of scale is increased slightly, and that high-frequency information either is not meaningful for the task of predicting atrial fibrillation, or, if it is, the model is unable to learn from it.
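For the Integrated Gradients side of the above, here is a minimal from-scratch sketch in PyTorch (a straight-path Riemann approximation following Sundararajan et al., 2017). The model, input shapes, and target class are hypothetical; the thesis's actual implementation and perturbation protocol are not reproduced.

import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    """Average the gradients of the target output along a straight path
    from baseline to input, then scale by (input - baseline)."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)   # (steps, *x.shape) acts as a batch
    path.requires_grad_(True)
    out = model(path)[:, target].sum()
    out.backward()
    avg_grads = path.grad.mean(dim=0)
    return (x - baseline) * avg_grads

# Hypothetical usage: ecg has shape (1, n_samples), ecg_model takes
# batches of shape (batch, 1, n_samples) and outputs class scores.
# attr = integrated_gradients(ecg_model, ecg, torch.zeros_like(ecg), target=1)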
26

Human-Centered Explainability Attributes in AI-Powered Eco-Driving : Understanding Truck Drivers' Perspective

Gjona, Ermela January 2023 (has links)
The growing presence of algorithm-generated recommendations in AI-powered services highlights the importance of responsible systems that explain outputs in a human-understandable form, especially in an automotive context. Implementing explainability in AI-powered eco-driving recommendations is important to ensure that drivers understand the underlying reasoning behind them. Previous literature on explainable AI (XAI) has been primarily technology-centered, and only a few studies involve the end-user perspective. There is a lack of knowledge of drivers' needs and requirements for explainability in an AI-powered eco-driving context. This study addresses the attributes that make an explanation "satisfactory", i.e., a satisfactory interface between humans and AI. It uses scenario-based interviews to understand the explainability attributes that influence truck drivers' intention to use eco-driving recommendations. Thematic analysis was used to categorize seven attributes into context-dependent (Format, Completeness, Accuracy, Timeliness, Communication) and generic (Reliability, Feedback loop) categories. The study contributes context-dependent attributes along three design dimensions: presentational, content-related, and temporal aspects of explainability. The findings provide an empirical foundation for end users' explainability needs and offer valuable insights for UX and system designers in eliciting end-user requirements.
27

Explainable AI For Predictive Maintenance

Karlsson, Nellie, Bengtsson, My January 2022 (has links)
As the complexity of deep learning models increases, the transparency of the systems decreases. It may be hard to understand the predictions a deep learning model makes, but even harder to understand why these predictions are made. Using eXplainable AI (XAI), we can gain greater knowledge of how the model operates and how the input it receives can change its predictions. In this thesis, we apply Integrated Gradients (IG), an XAI method primarily used on image data, to datasets containing tabular and time-series data. We also evaluate how the results of IG differ across various types of models and how a change of baseline can change the outcome. We observe that IG can be applied to both sequenced and non-sequenced data, with varying results. The gradient baseline does not affect the results of IG on models such as RNN, LSTM, and GRU, where the data contains time series, as much as it does on models like MLP with non-sequenced data. To confirm this, we also applied IG to SVM models, with results showing that the choice of gradient baseline has a significant impact on the results of IG.
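The baseline-sensitivity question above can be probed directly. Here is a hedged sketch using Captum's off-the-shelf IntegratedGradients (a stand-in; the thesis does not specify its tooling), comparing a zero baseline against a feature-mean baseline on a toy tabular MLP with synthetic data.

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Hypothetical tabular model and data, standing in for the thesis's setups.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
X = torch.randn(32, 8)

ig = IntegratedGradients(model)

# Two baseline choices: all-zeros vs. the feature-wise mean of the data.
zero_base = torch.zeros(1, 8)
mean_base = X.mean(dim=0, keepdim=True)

attr_zero = ig.attribute(X, baselines=zero_base, target=1)
attr_mean = ig.attribute(X, baselines=mean_base, target=1)

# Per-feature mean absolute shift in attribution caused by the baseline choice.
print((attr_zero - attr_mean).abs().mean(dim=0))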
28

Towards gradient faithfulness and beyond

Buono, Vincenzo, Åkesson, Isak January 2023 (has links)
The riveting interplay of industrialization, informatization, and exponential technological growth in recent years has shifted attention from classical machine learning techniques to more sophisticated deep learning approaches; yet deep learning's intrinsic black-box nature has been impeding its widespread adoption in transparency-critical operations. In this rapidly evolving landscape, where the symbiotic relationship between research and practical applications has never been more interwoven, the contribution of this paper is twofold: advancing the gradient faithfulness of CAM methods and exploring new frontiers beyond it. In the first part, we theorize three novel gradient-based CAM formulations, aimed at replacing and superseding traditional Grad-CAM-based methods by tackling the intricate and persistent vanishing- and saturating-gradient problems. Our work thus introduces novel enhancements to Grad-CAM that reshape the conventional gradient computation by incorporating a customized technique inspired by the well-established Expected Gradients difference-from-reference approach. Our proposed techniques (Expected Grad-CAM, Expected Grad-CAM++ and Guided Expected Grad-CAM), as they operate directly on the gradient computation rather than on the recombination of the weighting factors, are designed as a direct and seamless replacement for Grad-CAM and any later work built upon it. In the second part, we build on our prior proposition and devise a novel CAM method that produces both high-resolution and class-discriminative explanations without fusing other methods, while addressing the issues of both gradient and CAM methods altogether. Our last and most advanced proposition, Hyper Expected Grad-CAM, challenges the current state and formulation of visual explanation and faithfulness and produces a new type of hybrid saliency that satisfies the notions of natural encoding and perceived resolution. By rethinking faithfulness and resolution it is possible to generate saliencies that are more detailed, localized, and less noisy, and, most importantly, composed only of concepts that are encoded in the model's layer-wise understanding. Both contributions have been quantitatively and qualitatively compared and assessed in an evaluation study, 5 to 10 times larger than usual, on the ILSVRC2012 dataset against nine of the most recent and best-performing CAM techniques across six metrics. Expected Grad-CAM outperformed not only the original formulation but also more advanced methods, resulting in the second-best explainer with an Ins-Del score of 0.56. Hyper Expected Grad-CAM provided remarkable results across every quantitative metric, yielding a 0.15 increase in insertion when compared to the highest-scoring explainer PolyCAM, totaling an Ins-Del score of 0.72.
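Since the thesis modifies the gradient computation inside Grad-CAM, the classic Grad-CAM baseline it replaces is worth sketching. A minimal PyTorch version follows, using torchvision's resnet18 as a stand-in backbone and a random tensor as a stand-in image; the Expected-Gradients-style modification itself is only indicated in a comment, since the thesis's formulation is not reproduced here.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
acts, grads = {}, {}

# Hook the last convolutional block to capture activations and their gradients.
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)           # stand-in for a real image
score = model(x)[0].max()                 # top-class score
score.backward()

# Classic Grad-CAM: weight each channel by its spatially averaged gradient,
# combine, and pass through ReLU. (The thesis's Expected Grad-CAM variants
# replace this plain gradient with a difference-from-reference computation.)
weights = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]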
29

Is eXplainable AI suitable as a hypotheses generating tool for medical research? Comparing basic pathology annotation with heat maps to find out

Adlersson, Albert January 2023 (has links)
Hypothesis testing has long been a formal and standardized process. Hypothesis generation, on the other hand, remains largely informal. This thesis assesses whether eXplainable AI (XAI) can aid in the standardization of hypothesis generation through its use as a hypothesis-generating tool for medical research. We produce XAI heat maps for a Convolutional Neural Network (CNN) trained to classify Microsatellite Instability (MSI) in colon and gastric cancer, using four different XAI methods: Guided Backpropagation, VarGrad, Grad-CAM, and Sobol Attribution. We then compare these heat maps with pathology annotations in order to look for differences to turn into new hypotheses. Our CNN successfully generates non-random XAI heat maps while achieving a validation accuracy of 85% and a validation AUC of 93%, compared to an AUC of 87% achieved by others. Our results indicate that Guided Backpropagation and VarGrad are better at explaining high-level image features, whereas Grad-CAM and Sobol Attribution are better at explaining low-level ones, making the two groups of XAI methods good complements to each other. Images of MSI with high differentiation are more difficult to analyse regardless of which XAI method is used, probably because they exhibit less regularity. Regardless of this drawback, our assessment is that XAI can be a useful hypothesis-generating tool for medical research. Our results indicate that our CNN uses the same features as our basic pathology annotations when classifying MSI, with some features of basic pathology missing; these differences are what allowed us to generate new hypotheses.
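One way to make the heat-map-versus-annotation comparison concrete is to threshold the heat map and measure its overlap with a binary annotation mask; the "hot but unannotated" fraction then flags candidate features for new hypotheses. The following numpy sketch illustrates that idea under stated assumptions; it is not the thesis's actual protocol, and the quantile threshold is an arbitrary choice.

import numpy as np

def heatmap_annotation_overlap(heatmap, annotation, q=0.9):
    """Threshold the heat map at its q-th quantile, then report IoU with the
    annotation and the fraction of 'hot' pixels outside the annotation."""
    hot = heatmap >= np.quantile(heatmap, q)
    ann = annotation.astype(bool)
    iou = (hot & ann).sum() / max((hot | ann).sum(), 1)
    outside = (hot & ~ann).sum() / max(hot.sum(), 1)
    return iou, outside

# Synthetic example: a 64x64 heat map and a square annotated region.
rng = np.random.default_rng(0)
hm = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
print(heatmap_annotation_overlap(hm, mask))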
30

Applications of Deep Learning on Cardiac MRI: Design Approaches for a Computer Aided Diagnosis

Pérez Pelegrí, Manuel 27 April 2023 (has links)
Cardiovascular diseases are one of the most predominant causes of death and comorbidity in developed countries; as such, heavy investments have been made in recent decades to produce high-quality diagnosis tools and treatment applications for cardiac diseases. One of the best-proven tools to characterize the heart has been magnetic resonance imaging (MRI), thanks to its high-resolution capabilities in both the spatial and temporal dimensions, which allow generating dynamic imaging of the heart for accurate diagnosis. The dimensions of the left ventricle and the ejection fraction derived from them are the most powerful predictors of cardiac morbidity and mortality, and their quantification has important connotations for the management and treatment of patients. Thus, cardiac MRI is the most accurate imaging technique for left ventricular assessment. In order to obtain an accurate and fast diagnosis, reliable image-based biomarker computation through image-processing software is needed. Nowadays most of the employed tools rely on semi-automatic Computer-Aided Diagnosis (CAD) systems that require the clinical expert to interact with them, consuming valuable time from professionals whose aim should only be interpreting results. A paradigm shift is starting to enter the medical sector, where fully automatic CAD systems do not require any kind of user interaction. These systems are designed to compute any required biomarkers for a correct diagnosis without impacting the physician's natural workflow, and can start their computations the moment an image is saved within a hospital archive system. Automatic CAD systems, although highly regarded as one of the next big advances in the radiology world, are extremely difficult to develop and rely on Artificial Intelligence (AI) technologies to reach medical standards. In this context, deep learning (DL) has emerged in the past decade as the most successful technology to address this problem. More specifically, convolutional neural networks (CNN) have been one of the most successful and studied techniques for image analysis, including medical imaging. In this work we describe the main applications of CNNs in fully automatic CAD systems that help in the clinical diagnostics routine by means of cardiac MRI. The work covers the main points to take into account when developing such systems and presents different impactful results on the use of CNNs for cardiac MRI, separated into three projects covering the problems of segmentation, automatic biomarker estimation with explainability, and event detection. The full work describes novel and powerful approaches for applying CNNs to cardiac MRI analysis. It provides several key findings, enabling the integration of this novel but ever-growing technology into fully automatic CAD systems that can produce highly accurate, fast, and reliable results. The results described will greatly improve and positively impact the workflow of clinical experts in the near future. / Pérez Pelegrí, M. (2023). Applications of Deep Learning on Cardiac MRI: Design Approaches for a Computer Aided Diagnosis [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/192988
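The abstract's key biomarker, the ejection fraction derived from left-ventricular dimensions, follows directly from segmentation masks via EF = (EDV - ESV) / EDV. A hedged sketch of that final computation step is below; the mask shapes and voxel spacing are illustrative assumptions, not values from the thesis.

import numpy as np

def lv_volume_ml(mask, voxel_spacing_mm):
    """Volume of a binary left-ventricle segmentation mask in millilitres.
    mask: 3D boolean array; voxel_spacing_mm: (dz, dy, dx) in millimetres."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0   # 1 mL = 1000 mm^3

def ejection_fraction(ed_mask, es_mask, voxel_spacing_mm):
    """EF = (EDV - ESV) / EDV, from end-diastolic and end-systolic masks."""
    edv = lv_volume_ml(ed_mask, voxel_spacing_mm)
    esv = lv_volume_ml(es_mask, voxel_spacing_mm)
    return (edv - esv) / edv

# Toy example: a shrinking synthetic "ventricle" across the cardiac cycle.
ed = np.zeros((10, 64, 64), dtype=bool); ed[2:8, 20:44, 20:44] = True
es = np.zeros_like(ed);                  es[3:7, 26:38, 26:38] = True
print(f"EF = {ejection_fraction(ed, es, (8.0, 1.5, 1.5)):.2%}")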
