Is eXplainable AI suitable as a hypotheses generating tool for medical research? Comparing basic pathology annotation with heat maps to find out

Adlersson, Albert January 2023
Hypothesis testing has long been a formal and standardized process. Hypothesis generation, on the other hand, remains largely informal. This thesis assesses whether eXplainable AI (XAI) can aid in the standardization of hypothesis generation through its use as a hypothesis-generating tool for medical research. We produce XAI heat maps for a Convolutional Neural Network (CNN) trained to classify Microsatellite Instability (MSI) in colon and gastric cancer with four different XAI methods: Guided Backpropagation, VarGrad, Grad-CAM and Sobol Attribution. We then compare these heat maps with pathology annotations in order to look for differences to turn into new hypotheses. Our CNN successfully generates non-random XAI heat maps while achieving a validation accuracy of 85% and a validation AUC of 93%, compared with an AUC of 87% reported by others. We conclude that Guided Backpropagation and VarGrad are better at explaining high-level image features, whereas Grad-CAM and Sobol Attribution are better at explaining low-level ones, which makes the two groups of XAI methods good complements to each other. Images of MSI with high differentiation are more difficult to analyse regardless of which XAI method is used, probably because they exhibit less regularity. Despite this drawback, our assessment is that XAI can serve as a useful hypothesis-generating tool for medical research. Our results indicate that our CNN uses the same features as our basic pathology annotations when classifying MSI, although some features from the basic pathology annotations are missing, and it is from these differences that we are able to generate new hypotheses.
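Several of the saliency methods named in this abstract are straightforward to reproduce with standard deep-learning tooling. Below is a minimal, hedged sketch of Grad-CAM in PyTorch using forward and backward hooks; the ResNet-18 backbone, layer choice and random input are placeholders for illustration, not the thesis's actual MSI classifier.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM sketch (illustrative only; model and layer are assumptions,
# not the MSI classifier used in the thesis).
model = models.resnet18(weights=None).eval()
target_layer = model.layer4          # last convolutional block
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)      # placeholder image tensor
score = model(x)[0].max()            # score of the predicted class
model.zero_grad()
score.backward()

# Weight each activation map by its average gradient, then ReLU and upsample.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
```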

[en] EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR MEDICAL IMAGE CLASSIFIERS / [pt] INTELIGÊNCIA ARTIFICIAL EXPLICÁVEL PARA CLASSIFICADORES DE IMAGENS MÉDICAS

IAM PALATNIK DE SOUSA 02 July 2021
[en] Artificial intelligence has generated promising results for the medical area, especially in the last decade. However, the best performing models are opaque when it comes to their internal workings. In this thesis, methodologies and approaches are presented for the development of explainable classifiers of medical images. Two main methods, Squaregrid and EvEx, were developed. The first consists in a coarse but fast generation of explanatory heatmaps via segmentations in square grids, and the second in evolutionary multi-objective optimization aimed at the fine-tuning of segmentation parameters. Notably, both techniques are model-agnostic, which facilitates their use with any kind of image classifier. The potential of these approaches was demonstrated in three medical classification case studies: lymph node metastases, malaria and COVID-19. In some of these cases, existing publicly available classifier models were analyzed, while in others new models had to be trained. For the COVID-19 study, the trained ResNet50 achieved F-scores above 0.9 on the test set of a coronavirus classification competition, resulting in third place overall. Additionally, existing explainable artificial intelligence techniques, such as LIME and GradCAM, as well as Vanilla, Smooth and Integrated Gradients, were also used to generate heatmaps and enable comparisons. The results described here help to demonstrate and partially fill the gaps in integrating the areas of explainable artificial intelligence and medicine. They also help to demonstrate that different explainable artificial intelligence approaches can generate heatmaps that focus on different characteristics of the image. This in turn shows the importance of combining approaches to create a more complete overview of classifier models, as well as to extract information about what they learn from data.
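Squaregrid is described only at a high level here; as an illustration of the general idea of coarse heatmaps from square-grid segmentations, the sketch below occludes one grid cell at a time and records the drop in the predicted class score. It is an assumed simplification, not the exact algorithm from the thesis; `predict` and `grid` are placeholder names.

```python
import numpy as np

def square_grid_heatmap(image, predict, target_class, grid=8, fill=0.0):
    """Illustrative occlusion heatmap over a square grid (not the thesis's exact
    Squaregrid algorithm). `predict` maps an HxWxC image to class probabilities."""
    h, w = image.shape[:2]
    base = predict(image)[target_class]
    heat = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            occluded = image.copy()
            y0, y1 = i * h // grid, (i + 1) * h // grid
            x0, x1 = j * w // grid, (j + 1) * w // grid
            occluded[y0:y1, x0:x1] = fill                         # mask one square cell
            heat[i, j] = base - predict(occluded)[target_class]   # score drop
    return heat  # larger values = cell more important for the prediction
```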

Personalized fake news aware recommendation system

Sallami, Dorsaf 08 1900
In today’s world, where online news is so widespread, various methods have been developed to provide users with personalized news recommendations, and considerable progress has been made in presenting readers with content that attracts their attention. While accuracy is critical in news recommendation, other factors, such as diversity, novelty, and reliability, are essential for reader satisfaction. Technological advancements also bring additional challenges that can have a detrimental impact on the news domain, so researchers need to consider these new threats when developing news recommendations. Fake news, in particular, is a hot topic in the media today and a new threat to public safety. This work presents a modularized system capable of recommending news to the user and detecting fake news, all while helping users become more aware of this issue. First, we suggest FANAR, a FAke News Aware Recommender system, a modification of news recommendation algorithms that removes untrustworthy persons from the candidate user’s neighbourhood. To do this, we created a probabilistic model, the Beta Trust model, to calculate user reputation. For the recommendation process, we employed Graph Neural Networks. Then, we propose EXMULF, an EXplainable MUltimodal Content-based Fake News Detection System. It is tasked with the veracity analysis of information based on its textual content and the associated image, together with an Explainable AI (XAI) assistant that is tasked with combating the spread of fake news. Finally, we try to raise awareness about fake news by providing personalized alerts based on user reliability. To fulfill the objective of this work, we build a new dataset named FNEWR. Our experiments reveal that EXMULF outperforms 10 state-of-the-art fake news detection models in terms of accuracy. It is also worth mentioning that FANAR, which takes visual information in news into account, outperforms competing approaches based only on textual content. Furthermore, it reduces the amount of fake news found in the recommendations list.
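The abstract does not spell out the Beta Trust model; a common beta-reputation formulation, shown here purely as an assumed illustration, scores a user by the expected value of a Beta(r + 1, s + 1) distribution over r positive and s negative interactions.

```python
def beta_reputation(positive: int, negative: int) -> float:
    """Illustrative beta-reputation score: expected value of Beta(r + 1, s + 1).
    This is the classical formulation, assumed here; the thesis's Beta Trust
    model may differ in its details."""
    r, s = positive, negative
    return (r + 1) / (r + s + 2)

# Example: a user with 8 trustworthy and 2 untrustworthy interactions.
print(beta_reputation(8, 2))   # 0.75  -> kept in the candidate neighbourhood
print(beta_reputation(1, 9))   # ~0.17 -> likely filtered out as untrustworthy
```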

Reciprocal Explanations : An Explanation Technique for Human-AI Partnership in Design Ideation / Ömsesidiga Förklaringar : En förklaringsteknik för Human-AI-samarbete inom konceptutveckling

Hegemann, Lena January 2020
Advancements in creative artificial intelligence (AI) are leading to systems that can actively work together with designers in tasks such as ideation, i.e. the creation, development, and communication of ideas. In human group work, making suggestions and explaining the reasoning behind them, as well as comprehending other group members’ explanations, aids reflection, trust, alignment of goals and inspiration through diverse perspectives. Despite their ability to inspire through independent suggestions, state-of-the-art creative AI systems do not leverage these advantages of group work due to missing or one-sided explanations. For other use cases, AI systems that explain their reasoning are already gathering wide research interest. However, there is a knowledge gap about the effects of explanations on creativity. Furthermore, it is unknown whether a user can benefit from also explaining their contributions to an AI system. This thesis investigates whether reciprocal explanations, a novel technique which combines explanations from and to an AI system, improve the designers’ and AI’s joint exploration of ideas. I integrated reciprocal explanations into an AI-aided tool for mood board design, a common method for ideation. In our implementation, the AI system uses text to explain which features of its suggestions match or complement the current mood board. Occasionally, it asks for user explanations, providing several answer options, and reacts by aligning its strategy. A study was conducted with 16 professional designers who used the tool to create mood boards, followed by presentations and semi-structured interviews. The study emphasized a need for explanations that make the principles of the system transparent and showed that alignment of goals motivated participants to provide explanations to the system. Also, enabling users to explain their contributions to the AI system facilitated reflection on their own reasoning.

[en] A CRITICAL VIEW ON THE INTERPRETABILITY OF MACHINE LEARNING MODELS / [pt] UMA VISÃO CRÍTICA SOBRE A INTERPRETABILIDADE DE MODELOS DE APRENDIZADO DE MÁQUINA

JORGE LUIZ CATALDO FALBO SANTO 29 July 2019
[en] As machine learning models penetrate critical areas like medicine, the criminal justice system, and financial markets, their opacity, which hampers humans’ ability to interpret most of them, has become a problem to be solved. In this work, we present a new taxonomy to classify any method, approach or strategy for dealing with the problem of interpretability of machine learning models. The proposed taxonomy fills a gap in current taxonomy frameworks regarding the subjective perception that different interpreters have of the same model. To evaluate the proposed taxonomy, we classified the contributions of relevant scientific articles in the area.

Artificial Drivers for Online Time-Optimal Vehicle Trajectory Planning and Control

Piccinini, Mattia 12 April 2024
Recent advancements in time-optimal trajectory planning, control, and state estimation for autonomous vehicles have paved the way for the emerging field of autonomous racing. In the last 5-10 years, this form of racing has become a popular and challenging testbed for autonomous driving algorithms, aiming to enhance the safety and performance of future intelligent vehicles. In autonomous racing, the main goal is to develop real-time algorithms capable of autonomously maneuvering a vehicle around a racetrack, even in the presence of moving opponents. However, as a vehicle approaches its handling limits, several challenges arise for online trajectory planning and control. The vehicle dynamics become nonlinear and hard to capture with low-complexity models, while fast re-planning and good generalization capabilities are crucial to execute optimal maneuvers in unforeseen scenarios. These challenges leave several open research questions, three of which will be addressed in this thesis. The first explores developing accurate yet computationally efficient vehicle models for online time-optimal trajectory planning. The second focuses on enhancing learning-based methods for trajectory planning, control, and state estimation, overcoming issues like poor generalization and the need for large amounts of training data. The third investigates the optimality of online-executed trajectories with simplified vehicle models, compared to offline solutions of minimum-lap-time optimal control problems using high-fidelity vehicle models. This thesis consists of four parts, each of which addresses one or more of the aforementioned research questions, in the fields of time-optimal vehicle trajectory planning, control and state estimation. The first part of the thesis presents a novel artificial race driver (ARD), which autonomously learns to drive a vehicle around an obstacle-free circuit, performing online time-optimal vehicle trajectory planning and control. The following research questions are addressed in this part: How optimal is the trajectory executed online by an artificial agent that drives a high-fidelity vehicle model, in comparison with a minimum-lap-time optimal control problem (MLT-OCP), based on the same vehicle model and solved offline? Can the artificial agent generalize to circuits and conditions not seen during training? ARD employs an original neural network with a physics-driven internal structure (PhS-NN) for steering control, and a novel kineto-dynamical vehicle model for time-optimal trajectory planning. A new learning scheme enables ARD to progressively learn the nonlinear dynamics of an unknown vehicle. When tested on a high-fidelity model of a high-performance car, ARD achieves very similar results as an MLT-OCP, based on the same vehicle model and solved offline. When tested on a 1:8 vehicle prototype, ARD achieves similar lap times as an offline optimization problem. Thanks to its physics-driven architecture, ARD generalizes well to unseen circuits and scenarios, and is robust to unmodeled changes in the vehicle’s mass. The second part of the thesis deals with online time-optimal trajectory planning for dynamic obstacle avoidance. The research questions addressed in this part are: Can time-optimal trajectory planning for dynamic obstacle avoidance be performed online and with low computational times? How optimal is the resulting trajectory? Can the planner generalize to unseen circuits and scenarios? 
At each planning step, the proposed approach builds a tree of time-optimal motion primitives, by performing a sampling-based exploration in a local mesh of waypoints. The novel planner is validated in challenging scenarios with multiple dynamic opponents, and is shown to be computationally efficient, to return near-time-optimal trajectories, and to generalize well to new circuits and scenarios. The third part of the thesis shows an application of time-optimal trajectory planning with optimal control and PhS-NNs in the context of autonomous parking. The research questions addressed in this part are: Can an autonomous parking framework perform fast online trajectory planning and tracking in real-life parking scenarios, such as parallel, reverse and angle parking spots, and unstructured environments? Can the framework generalize to unknown variations in the vehicle’s parameters and road adherence, and operate with measurement noise? The autonomous parking framework employs a novel penalty function for collision avoidance with optimal control, a new warm-start strategy and an original PhS-NN for steering control. The framework executes complex maneuvers in a wide range of parking scenarios, and is validated with a high-fidelity vehicle model. The framework is shown to be robust to variations in the vehicle’s mass and road adherence, and to operate with realistic measurement noise. The fourth and last part of the thesis develops novel kinematics-structured neural networks (KS-NNs) to estimate the vehicle’s lateral velocity, which is a key quantity for time-optimal trajectory planning and control. The KS-NNs are a special type of PhS-NNs: their internal structure is designed to incorporate the kinematic principles, which enhances the generalization capabilities and physical explainability. The research questions addressed in this part are: Can a neural network-based lateral velocity estimator generalize well when tested on a vehicle not used for training? Can the network’s parameters be physically explainable? The approach is validated using an open dataset with two race cars. In comparison with traditional and neural network estimators of the literature, the KS-NNs improve noise rejection, exhibit better generalization capacity, are more sample-efficient, and their structure is physically explainable.
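The kineto-dynamical model itself is not given in the abstract; to make the notion of a low-complexity planning model concrete, the sketch below steps a generic textbook kinematic bicycle model, which is an assumption standing in for (and much simpler than) the model developed in the thesis.

```python
import math

def kinematic_bicycle_step(x, y, yaw, v, accel, steer, wheelbase=2.5, dt=0.02):
    """One explicit-Euler step of the standard kinematic bicycle model.
    Generic textbook model, shown only to illustrate the kind of low-complexity
    vehicle model online planners rely on; the thesis's kineto-dynamical model
    is more elaborate."""
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / wheelbase * math.tan(steer) * dt
    v += accel * dt
    return x, y, yaw, v

# Roll out a short constant-steer arc over a 2 s horizon at dt = 0.02 s.
state = (0.0, 0.0, 0.0, 10.0)            # x [m], y [m], yaw [rad], speed [m/s]
for _ in range(100):
    state = kinematic_bicycle_step(*state, accel=1.0, steer=0.05)
print(state)
```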

From Traditional to Explainable AI-Driven Predictive Maintenance : Transforming Maintenance Strategies at Glada Hudikhem with AI and Explainable AI

Rajta, Amarildo January 2024
This thesis investigates the integration of artificial intelligence (AI) and machine learning (ML) technologies into predictive maintenance (PdM) operations at Glada Hudikhem. The primary objectives are to evaluate the effectiveness of different AI/ML models for predicting household appliance failures and to enhance the transparency and reliability of these predictions through explainable AI (XAI) techniques. The study compares various shallow and deep learning models, revealing that while deep models require more computational resources and can take 98% more time to train than shallow models, they score about 1.4% worse in F-1 score. The F-1 score is a metric that combines precision (the fraction of true positives among predicted positives) and recall (the fraction of true positives among actual positives). Additionally, the research highlights the importance of XAI in making AI-driven maintenance decisions more transparent and trustworthy, thus addressing the "black box" nature of traditional AI models. The findings suggest that integrating AI and XAI into PdM can improve maintenance workflows and reduce operational costs, with recommendations for industry partners to explore AI/ML solutions that balance resource efficiency and performance. The study also discusses the ethical and societal implications of AI adoption in predictive maintenance, emphasizing the need for responsible implementation. Furthermore, it outlines the potential for AI to automate routine maintenance tasks, thereby freeing up human resources for more complex issues and enhancing overall operational efficiency. Through a rigorous discussion and in-depth analysis, this thesis offers a robust framework for future research and practical applications in the field of AI-driven predictive maintenance.
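Since the abstract defines the F-1 score in words, the small worked example below restates it in code; the confusion-matrix counts are invented for illustration only.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F-1 score from confusion-matrix counts: the harmonic mean of
    precision (tp / (tp + fp)) and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Made-up counts for a failure-prediction model: 80 true positives,
# 20 false positives, 10 missed failures.
print(f1_score(tp=80, fp=20, fn=10))   # precision 0.8, recall ~0.889, F-1 ~0.842
```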

Applications of Deep Learning on Cardiac MRI: Design Approaches for a Computer Aided Diagnosis

Pérez Pelegrí, Manuel 27 April 2023
[EN] Cardiovascular diseases are one of the most predominant causes of death and comorbidity in developed countries; as such, heavy investments have been made in recent decades to produce high-quality diagnosis tools and treatment applications for cardiac diseases. One of the best proven tools to characterize the heart has been magnetic resonance imaging (MRI), thanks to its high-resolution capabilities in both the spatial and temporal dimensions, allowing dynamic imaging of the heart that enables accurate diagnosis. The dimensions of the left ventricle and the ejection fraction derived from them are the most powerful predictors of cardiac morbidity and mortality, and their quantification has important connotations for the management and treatment of patients. Thus, cardiac MRI is the most accurate imaging technique for left ventricular assessment. In order to get an accurate and fast diagnosis, reliable image-based biomarker computation through image processing software is needed. Nowadays most of the employed tools rely on semi-automatic Computer-Aided Diagnosis (CAD) systems that require the clinical expert to interact with them, consuming valuable time from professionals whose aim should only be interpreting results. A paradigm shift is starting to reach the medical sector in which fully automatic CAD systems do not require any kind of user interaction. These systems are designed to compute any required biomarkers for a correct diagnosis without impacting the physician’s natural workflow and can start their computations the moment an image is saved within a hospital archive system. Automatic CAD systems, although highly regarded as one of the next big advances in the radiology world, are extremely difficult to develop and rely on Artificial Intelligence (AI) technologies in order to reach medical standards. In this context, Deep Learning (DL) has emerged in the past decade as the most successful technology to address this problem. More specifically, convolutional neural networks (CNN) have been one of the most successful and studied techniques for image analysis, including medical imaging. In this work we describe the main applications of CNNs in fully automatic CAD systems to help in the clinical diagnostics routine by means of cardiac MRI. The work covers the main points to take into account in order to develop such systems and presents different impactful results on the use of CNNs for cardiac MRI, separated into three main projects covering the segmentation, automatic biomarker estimation with explainability, and event detection problems. The full work presented describes novel and powerful approaches to applying CNNs to cardiac MRI analysis. The work provides several key findings, enabling the integration, in several ways, of this novel but rapidly growing technology into fully automatic CAD systems that can produce highly accurate, fast and reliable results. The results described will greatly improve and positively impact the workflow of clinical experts in the near future. / Pérez Pelegrí, M. (2023). Applications of Deep Learning on Cardiac MRI: Design Approaches for a Computer Aided Diagnosis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/192988
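The abstract highlights left-ventricular volumes and the ejection fraction derived from them; the standard definition, EF = (EDV - ESV) / EDV, is shown below with made-up volumes rather than data from the thesis.

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left-ventricular ejection fraction, as a percentage, from end-diastolic
    (EDV) and end-systolic (ESV) volumes. Standard definition; the volumes in
    the example below are illustrative, not thesis data."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Example: EDV = 120 ml, ESV = 50 ml -> EF ~58.3%, within the normal range.
print(ejection_fraction(120.0, 50.0))
```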

Operational cybersecurity: pros and cons of AI tools : A preliminary study / Operativ cybersäkerhet: för och nackdelar med AI verktyg : En Förstudie

Jepsson, David, Tillman, Axel January 2023
This study examines the advantages and disadvantages of implementing artificial intelligence (AI) as a tool within a Security Operations Center (SOC). The purpose of the study is to investigate whether and how AI tools can facilitate incident handling within a SOC, and which new challenges arise. The study was conducted through qualitative interviews with four people with expert knowledge in both AI and cybersecurity. The experts were asked about their view of AI as a tool, how they regard AI and cybersecurity, and how AI can be applied in relation to the four steps of NIST incident handling: preparation; detection and analysis; containment, eradication and recovery; and post-incident activity. The results show both advantages and disadvantages of using AI tools within a SOC, including more efficient configuration of the SIEM, fewer false positive alarms, a reduced workload for SOC analysts, and handling of zero-day incidents. Disadvantages include the lower explainability of larger AI models, legal challenges, and the dependence on good input data. Finally, the study shows that the use of AI as a tool in a SOC can be beneficial and that more research is needed to explore specific techniques and tools.

Explaining Neural Networks used for PIM Cancellation / Förklarandet av Neurala Nätverk menade för PIM-elimination

Diffner, Fredrik January 2022
Passive Intermodulation is a type of distortion affecting the sensitive received signals in a cellular network, and a growing problem in the telecommunication field. One way to mitigate this problem is through Passive Intermodulation Cancellation, where the predicted noise in a signal is modeled with polynomials. Recent experiments using neural networks instead of polynomials to model this noise have shown promising results. However, one drawback of neural networks is their lack of explainability. In this work, we identify a suitable method that provides explanations for this use case. We apply this technique to explain the neural networks used for Passive Intermodulation Cancellation and discuss the results with domain experts. We show that the input space as well as the architecture could be altered, and propose an alternative architecture for the neural network used for Passive Intermodulation Cancellation. This alternative architecture leads to a significant reduction in trainable parameters, a finding which is valuable in a cellular network where resources are heavily constrained. When performing an explainability analysis of the alternative model, the explanations are also more in line with domain expertise.
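The explanation method chosen in the thesis is not named in the abstract; as a generic illustration of explaining a PIM-style regression network, the sketch below computes simple input-gradient saliency over a small stand-in model. The architecture, feature layout and tensor shapes are all assumptions, not the network studied in the thesis.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a small regression network standing in for a PIM
# cancellation model (inputs = transmit-signal features, output = predicted
# PIM noise sample), plus input-gradient saliency as a simple explanation.
torch.manual_seed(0)
pim_model = nn.Sequential(
    nn.Linear(16, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

features = torch.randn(1, 16, requires_grad=True)   # placeholder input window
predicted_noise = pim_model(features).squeeze()
predicted_noise.backward()

# Larger absolute gradients = input features the prediction is most sensitive to.
saliency = features.grad.abs().squeeze()
print(saliency.argsort(descending=True)[:5])         # five most influential inputs
```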
