  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Transparent ML Systems for the Process Industry : How can a recommendation system perceived as transparent be designed for experts in the process industry?

Fikret, Eliz January 2023 (has links)
Process monitoring is a field that can greatly benefit from the adoption of machine learning solutions like recommendation systems. However, for domain experts to embrace these technologies within their work processes, clear explanations are crucial. Therefore, it is important to adopt user-centred methods for designing more transparent recommendation systems. This study explores this topic through a case study in the pulp and paper industry. By employing a user-centred and design-first adaptation of the question-driven design process, this study aims to uncover the explanation needs and requirements of industry experts, as well as formulate design visions and recommendations for transparent recommendation systems. The results of the study reveal five common explanation types that are valuable for domain experts while also highlighting limitations in previous studies on explanation types. Additionally, nine requirements are identified and utilised in the creation of a prototype, which domain experts evaluate. The evaluation process leads to the development of several design recommendations that can assist HCI researchers and designers in creating effective, transparent recommendation systems. Overall, this research contributes to the field of HCI by enhancing the understanding of transparent recommendation systems from a user-centred perspective.
52

CondBEHRT: A Conditional Probability Based Transformer for Modeling Medical Ontology

Lerjebo, Linus, Hägglund, Johannes January 2022 (has links)
In recent years the number of electronic healthcare records (EHRs) has increased rapidly. An EHR is a systematized collection of patient health information in a digital format. EHR systems maintain the diagnoses, medications, procedures, and lab tests associated with patients at each visit to a hospital or care centre. Since the information spans multiple visits, EHRs can be used to increase the quality of care. This is especially useful when working with chronic diseases, because they tend to evolve. Many deep learning methods make use of these EHRs to solve different prediction tasks. Transformers have shown impressive results in many sequence-to-sequence tasks within natural language processing. This paper focuses on using transformers, specifically on using a sequence of visits for prediction tasks. The model presented in this paper is called CondBEHRT. Compared to previous state-of-the-art models, CondBEHRT focuses on using as much of the available data as possible to understand the patient's trajectory. Based on all patients, the model learns the medical ontology between diagnoses, medications, and procedures. The results show that the inferred medical ontology can simulate reality quite well. Having the medical ontology also gives insight into the explainability of model decisions. We also compare the proposed model with state-of-the-art methods on two different use cases: predicting the codes given in the next visit, and predicting whether the patient will be readmitted within 30 days.
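The abstract does not specify how CondBEHRT infers its ontology; as a hedged illustration of the general idea, conditional code-to-code probabilities can be estimated from visit co-occurrence counts (the function and the medical codes below are invented for the sketch):

```python
from collections import Counter

def conditional_probabilities(visits):
    """Estimate P(code_b | code_a) from co-occurrence counts across visits.

    `visits` is a list of sets of medical codes (diagnoses, medications,
    procedures) recorded in the same visit. Returns a dict mapping
    (code_a, code_b) -> P(code_b | code_a).
    """
    single = Counter()
    pair = Counter()
    for codes in visits:
        for a in codes:
            single[a] += 1
            for b in codes:
                if a != b:
                    pair[(a, b)] += 1
    return {(a, b): n / single[a] for (a, b), n in pair.items()}

# Toy data: each set is one hospital visit.
visits = [
    {"diabetes", "metformin"},
    {"diabetes", "metformin", "hba1c_test"},
    {"diabetes", "insulin"},
]
probs = conditional_probabilities(visits)
# P(metformin | diabetes) = 2/3
```

A learned model like CondBEHRT would replace these raw counts with attention-derived associations, but the resulting object (a directed, weighted code graph) has the same shape.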
53

Towards gradient faithfulness and beyond

Buono, Vincenzo, Åkesson, Isak January 2023 (has links)
The riveting interplay of industrialization, informatization, and exponential technological growth in recent years has shifted attention from classical machine learning techniques to more sophisticated deep learning approaches; yet deep learning's intrinsic black-box nature has impeded its widespread adoption in transparency-critical operations. In this rapidly evolving landscape, where the symbiotic relationship between research and practical applications has never been more interwoven, the contribution of this paper is twofold: advancing the gradient faithfulness of CAM methods, and exploring new frontiers beyond it. In the first part, we theorize three novel gradient-based CAM formulations, aimed at replacing and superseding traditional Grad-CAM-based methods by tackling the intricate and persistent vanishing- and saturating-gradient problems. Our work thus introduces enhancements to Grad-CAM that reshape the conventional gradient computation by incorporating a customized technique inspired by the well-established Expected Gradients difference-from-reference approach. Our proposed techniques (Expected Grad-CAM, Expected Grad-CAM++ and Guided Expected Grad-CAM), as they operate directly on the gradient computation rather than on the recombination of the weighting factors, are designed as a direct and seamless replacement for Grad-CAM and any posterior work built upon it. In the second part, we build on our prior proposition and devise a novel CAM method that produces both high-resolution and class-discriminative explanations without fusing other methods, while addressing the issues of both gradient and CAM methods altogether. Our last and most advanced proposition, Hyper Expected Grad-CAM, challenges the current state and formulation of visual explanation and faithfulness, and produces a new type of hybrid saliency that satisfies the notions of natural encoding and perceived resolution.
By rethinking faithfulness and resolution, it is possible to generate saliencies which are more detailed, localized, and less noisy, and, most importantly, which are composed only of concepts encoded by the model's layerwise understanding. Both contributions were quantitatively and qualitatively compared and assessed, in an evaluation study on the ILSVRC2012 dataset 5 to 10 times larger than usual, against nine of the most recent and best-performing CAM techniques across six metrics. Expected Grad-CAM outperformed not only the original formulation but also more advanced methods, resulting in the second-best explainer with an Ins-Del score of 0.56. Hyper Expected Grad-CAM provided remarkable results across every quantitative metric, yielding a 0.15 increase in insertion compared to the highest-scoring explainer, PolyCAM, totaling an Ins-Del score of 0.72.
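The exact Expected Grad-CAM formulation is not given in the abstract; below is a minimal NumPy sketch of the idea it describes, replacing Grad-CAM's single-point gradient with expected-gradients-style samples scaled by the difference from a random reference. All function names and the toy `grad_fn` interface are assumptions of this sketch, not the thesis's implementation:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Vanilla Grad-CAM: channel weights are globally averaged gradients."""
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    return np.maximum(cam, 0)                         # ReLU

def expected_grad_cam(activations, grad_fn, baselines, steps=8, rng=None):
    """Expected-gradients-style weighting: average gradients sampled along
    paths from random baselines, scaled by the difference from the reference,
    instead of the single-point gradient used by Grad-CAM."""
    rng = rng or np.random.default_rng(0)
    acc = np.zeros_like(activations)
    for _ in range(steps):
        base = baselines[rng.integers(len(baselines))]
        alpha = rng.random()                    # random point on the path
        point = base + alpha * (activations - base)
        acc += grad_fn(point) * (activations - base)
    return grad_cam(activations, acc / steps)
```

Because the change lives entirely in how the gradient tensor is produced, any method that consumes Grad-CAM-style weights (Grad-CAM++, guided variants) can reuse it unchanged, which matches the "drop-in replacement" claim in the abstract.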
54

Is eXplainable AI suitable as a hypotheses generating tool for medical research? Comparing basic pathology annotation with heat maps to find out

Adlersson, Albert January 2023 (has links)
Hypothesis testing has long been a formal and standardized process. Hypothesis generation, on the other hand, remains largely informal. This thesis assesses whether eXplainable AI (XAI) can aid in the standardization of hypothesis generation through its use as a hypothesis-generating tool for medical research. We produce XAI heat maps for a Convolutional Neural Network (CNN) trained to classify Microsatellite Instability (MSI) in colon and gastric cancer, using four different XAI methods: Guided Backpropagation, VarGrad, Grad-CAM and Sobol Attribution. We then compare these heat maps with pathology annotations in order to look for differences to turn into new hypotheses. Our CNN successfully generates non-random XAI heat maps while achieving a validation accuracy of 85% and a validation AUC of 93%, compared to others who achieve an AUC of 87%. Our results conclude that Guided Backpropagation and VarGrad are better at explaining high-level image features, whereas Grad-CAM and Sobol Attribution are better at explaining low-level ones. This makes the two groups of XAI methods good complements to each other. Images of Microsatellite Instability (MSI) with high differentiation are more difficult to analyse regardless of which XAI method is used, probably because they exhibit less regularity. Despite this drawback, our assessment is that XAI can be a useful hypothesis-generating tool for research in medicine. Our results indicate that our CNN uses the same features as our basic pathology annotations when classifying MSI, albeit with some features of basic pathology missing; from these differences we were successfully able to generate new hypotheses.
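The thesis does not state how heat maps and annotations are compared numerically; one concrete way to make such a comparison, used here purely as an assumed sketch, is intersection-over-union between a thresholded saliency map and a binary pathology mask:

```python
import numpy as np

def heatmap_annotation_iou(heatmap, annotation, quantile=0.8):
    """Binarize a saliency heat map at a quantile threshold and compute its
    intersection-over-union with a binary pathology annotation mask.

    heatmap    : 2-D float array of saliency scores
    annotation : 2-D bool array marking annotated pathology regions
    """
    mask = heatmap >= np.quantile(heatmap, quantile)
    inter = np.logical_and(mask, annotation).sum()
    union = np.logical_or(mask, annotation).sum()
    return inter / union if union else 0.0

# Toy example: saliency concentrated on the bottom row of a 4x4 tile,
# annotation covering exactly that row.
hm = np.arange(16).reshape(4, 4) / 15.0
ann = np.zeros((4, 4), dtype=bool)
ann[3] = True
score = heatmap_annotation_iou(hm, ann, quantile=0.75)  # perfect overlap
```

Regions where the IoU is low, i.e. where the model attends to tissue outside the annotation, are exactly the "differences to turn into new hypotheses" the abstract describes.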
55

[en] EXPLAINABLE ARTIFICIAL INTELLIGENCE FOR MEDICAL IMAGE CLASSIFIERS

IAM PALATNIK DE SOUSA 02 July 2021 (has links)
[en] Artificial Intelligence has generated promising results for the medical area, especially in the last decade. However, the best-performing models are opaque when it comes to their internal workings. In this thesis, new methodologies and approaches are presented for the development of explainable classifiers of medical images. Two main methods, Squaregrid and EvEx, were developed. The first consists of a coarser, but fast, generation of explanatory heatmaps via segmentations in square grids, while the second is based on evolutionary multi-objective optimization aimed at the fine-tuning of segmentation parameters. Notably, both techniques are model-agnostic, which facilitates their use with any kind of image classifier. The potential of these approaches was demonstrated in three case studies of medical classification: lymph node metastases, malaria and COVID-19. In some of these cases, existing, publicly available classifier models were analyzed, while in others new models had to be trained. For the COVID-19 study, the trained ResNet50 achieved F-scores above 0.9 on the test set of a coronavirus classification competition, resulting in third place overall. Additionally, existing explainable AI techniques, such as LIME and GradCAM, as well as Vanilla, Smooth and Integrated Gradients, were also used to generate heatmaps and enable comparisons. The results described here help to demonstrate and partially fill the gaps associated with integrating the areas of explainable artificial intelligence and medicine. They also help to demonstrate that different explainable-AI approaches can generate heatmaps that focus on different characteristics of the image. This in turn shows the importance of combining approaches to create a more complete overview of classifier models, as well as to extract information about what they learn from data.
56

Personalized fake news aware recommendation system

Sallami, Dorsaf 08 1900 (has links)
In today’s world, where online news is so widespread, various methods have been developed to provide users with personalized news recommendations. Wonderful accomplishments have been made when it comes to providing readers with everything that could attract their attention. While accuracy is critical in news recommendation, other factors, such as diversity, novelty, and reliability, are essential to readers’ satisfaction. In fact, technological advancements bring additional challenges which might have a detrimental impact on the news domain. Therefore, researchers need to consider these new threats in the development of news recommendation. Fake news, in particular, is a hot topic in the media today and a new threat to public safety. This work presents a modularized system capable of recommending news to the user and detecting fake news, all while helping users become more aware of this issue. First, we suggest FANAR, a FAke News Aware Recommender system, a modification to news recommendation algorithms that removes untrustworthy persons from the candidate user’s neighbourhood. To do this, we created a probabilistic model, the Beta Trust model, to calculate user reputation. For the recommendation process, we employed Graph Neural Networks. Then, we propose EXMULF, an EXplainable MUltimodal Content-based Fake News Detection System. It is tasked with the veracity analysis of information based on its textual content and the associated image, together with an Explainable AI (XAI) assistant that is tasked with combating the spread of fake news. Finally, we try to raise awareness about fake news by providing personalized alerts based on user reliability. To fulfill the objective of this work, we built a new dataset named FNEWR. Our experiments reveal that EXMULF outperforms 10 state-of-the-art fake news detection models in terms of accuracy. It is also worth mentioning that FANAR, which takes visual information in news into account, outperforms competing approaches based only on textual content. Furthermore, it reduces the amount of fake news found in the recommendations list.
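The abstract names a Beta Trust model for user reputation but does not give its form; a minimal sketch assuming the standard Beta-reputation expectation, with `trusted_neighbours` as a hypothetical illustration of the neighbourhood filtering FANAR is described to perform:

```python
def beta_reputation(positive, negative):
    """Beta-distribution reputation: the expected value of Beta(a, b) with
    a = positive + 1 and b = negative + 1 (a uniform prior), so a user with
    no history starts at a neutral 0.5."""
    return (positive + 1) / (positive + negative + 2)

def trusted_neighbours(interactions, threshold=0.5):
    """Keep only users whose reputation clears the threshold. A hedged sketch
    of filtering a candidate user's neighbourhood; the real FANAR feeds the
    surviving neighbourhood into a graph neural network."""
    return {user for user, (pos, neg) in interactions.items()
            if beta_reputation(pos, neg) >= threshold}

# Toy neighbourhood: (positive, negative) interaction counts per user.
neighbourhood = {"alice": (8, 2), "bob": (0, 10)}
kept = trusted_neighbours(neighbourhood)  # only "alice" survives
```

The choice of a Beta prior is the standard one in reputation systems: it updates with simple counting and never assigns absolute trust or distrust from finite evidence.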
57

Reciprocal Explanations: An Explanation Technique for Human-AI Partnership in Design Ideation

Hegemann, Lena January 2020 (has links)
Advancements in creative artificial intelligence (AI) are leading to systems that can actively work together with designers on tasks such as ideation, i.e. the creation, development, and communication of ideas. In human group work, making suggestions and explaining the reasoning behind them, as well as comprehending other group members’ explanations, aids reflection, trust, alignment of goals, and inspiration through diverse perspectives. Despite their ability to inspire through independent suggestions, state-of-the-art creative AI systems do not leverage these advantages of group work, due to missing or one-sided explanations. For other use cases, AI systems that explain their reasoning are already gathering wide research interest. However, there is a knowledge gap concerning the effects of explanations on creativity. Furthermore, it is unknown whether a user can benefit from also explaining their contributions to an AI system. This thesis investigates whether reciprocal explanations, a novel technique which combines explanations from and to an AI system, improve the designers’ and the AI’s joint exploration of ideas. I integrated reciprocal explanations into an AI-aided tool for mood board design, a common method for ideation. In our implementation, the AI system uses text to explain which features of its suggestions match or complement the current mood board. Occasionally, it asks for user explanations, providing several answer options that it reacts to by aligning its strategy. A study was conducted with 16 professional designers who used the tool to create mood boards, followed by presentations and semi-structured interviews. The study emphasized a need for explanations that make the principles of the system transparent, and showed that alignment of goals motivated participants to provide explanations to the system. Also, enabling users to explain their contributions to the AI system facilitated reflection on their own reasons.
58

Leveraging Explainable Machine Learning to Raise Awareness among Preadolescents about Gender Bias in Supervised Learning

Melsion Perez, Gaspar Isaac January 2020 (has links)
Machine learning systems have become ubiquitous in our society. This has raised concerns about the potential discrimination that these systems might exert due to unconscious bias present in the data, for example regarding gender and race. While this issue has been proposed as an essential subject to be included in new AI curricula for schools, research has shown that it is a difficult topic for students to grasp. This thesis aims to develop an educational platform tailored to raise awareness of the societal implications of gender bias in supervised learning. It assesses whether using an explainable model has a positive effect in teaching the impacts of gender bias to preadolescents from 10 to 13 years old. A study was carried out at a school in Stockholm employing an online platform with a classifier incorporating Grad-CAM as the explainability technique, enabling it to visually explain its own predictions. The students were divided into two groups, differentiated by whether or not they used the explainable model. Analysis of the answers demonstrates that preadolescents significantly improve their understanding of the concept of bias in terms of gender discrimination when they interact with the explainable model, highlighting its suitability for educational programs.
59

[en] A CRITICAL VIEW ON THE INTERPRETABILITY OF MACHINE LEARNING MODELS

JORGE LUIZ CATALDO FALBO SANTO 29 July 2019 (has links)
[en] As machine learning models penetrate critical areas like medicine, the criminal justice system, and financial markets, their opacity, which hampers humans’ ability to interpret most of them, has become a problem to be solved. In this work, we present a new taxonomy to classify any method, approach or strategy for dealing with the problem of the interpretability of machine learning models. The proposed taxonomy fills a gap in current taxonomy frameworks regarding the subjective perception of different interpreters of the same model. To evaluate the proposed taxonomy, we classified the contributions of relevant scientific articles in the area.
60

Artificial Drivers for Online Time-Optimal Vehicle Trajectory Planning and Control

Piccinini, Mattia 12 April 2024 (has links)
Recent advancements in time-optimal trajectory planning, control, and state estimation for autonomous vehicles have paved the way for the emerging field of autonomous racing. In the last 5-10 years, this form of racing has become a popular and challenging testbed for autonomous driving algorithms, aiming to enhance the safety and performance of future intelligent vehicles. In autonomous racing, the main goal is to develop real-time algorithms capable of autonomously maneuvering a vehicle around a racetrack, even in the presence of moving opponents. However, as a vehicle approaches its handling limits, several challenges arise for online trajectory planning and control. The vehicle dynamics become nonlinear and hard to capture with low-complexity models, while fast re-planning and good generalization capabilities are crucial to execute optimal maneuvers in unforeseen scenarios. These challenges leave several open research questions, three of which will be addressed in this thesis. The first explores developing accurate yet computationally efficient vehicle models for online time-optimal trajectory planning. The second focuses on enhancing learning-based methods for trajectory planning, control, and state estimation, overcoming issues like poor generalization and the need for large amounts of training data. The third investigates the optimality of online-executed trajectories with simplified vehicle models, compared to offline solutions of minimum-lap-time optimal control problems using high-fidelity vehicle models. This thesis consists of four parts, each of which addresses one or more of the aforementioned research questions, in the fields of time-optimal vehicle trajectory planning, control and state estimation. The first part of the thesis presents a novel artificial race driver (ARD), which autonomously learns to drive a vehicle around an obstacle-free circuit, performing online time-optimal vehicle trajectory planning and control. 
The following research questions are addressed in this part: How optimal is the trajectory executed online by an artificial agent that drives a high-fidelity vehicle model, in comparison with a minimum-lap-time optimal control problem (MLT-OCP), based on the same vehicle model and solved offline? Can the artificial agent generalize to circuits and conditions not seen during training? ARD employs an original neural network with a physics-driven internal structure (PhS-NN) for steering control, and a novel kineto-dynamical vehicle model for time-optimal trajectory planning. A new learning scheme enables ARD to progressively learn the nonlinear dynamics of an unknown vehicle. When tested on a high-fidelity model of a high-performance car, ARD achieves very similar results to an MLT-OCP, based on the same vehicle model and solved offline. When tested on a 1:8 vehicle prototype, ARD achieves lap times similar to those of an offline optimization problem. Thanks to its physics-driven architecture, ARD generalizes well to unseen circuits and scenarios, and is robust to unmodeled changes in the vehicle’s mass. The second part of the thesis deals with online time-optimal trajectory planning for dynamic obstacle avoidance. The research questions addressed in this part are: Can time-optimal trajectory planning for dynamic obstacle avoidance be performed online and with low computational times? How optimal is the resulting trajectory? Can the planner generalize to unseen circuits and scenarios? At each planning step, the proposed approach builds a tree of time-optimal motion primitives, by performing a sampling-based exploration in a local mesh of waypoints. The novel planner is validated in challenging scenarios with multiple dynamic opponents, and is shown to be computationally efficient, to return near-time-optimal trajectories, and to generalize well to new circuits and scenarios.
The third part of the thesis shows an application of time-optimal trajectory planning with optimal control and PhS-NNs in the context of autonomous parking. The research questions addressed in this part are: Can an autonomous parking framework perform fast online trajectory planning and tracking in real-life parking scenarios, such as parallel, reverse and angle parking spots, and unstructured environments? Can the framework generalize to unknown variations in the vehicle’s parameters and road adherence, and operate with measurement noise? The autonomous parking framework employs a novel penalty function for collision avoidance with optimal control, a new warm-start strategy and an original PhS-NN for steering control. The framework executes complex maneuvers in a wide range of parking scenarios, and is validated with a high-fidelity vehicle model. The framework is shown to be robust to variations in the vehicle’s mass and road adherence, and to operate with realistic measurement noise. The fourth and last part of the thesis develops novel kinematics-structured neural networks (KS-NNs) to estimate the vehicle’s lateral velocity, which is a key quantity for time-optimal trajectory planning and control. The KS-NNs are a special type of PhS-NNs: their internal structure is designed to incorporate the kinematic principles, which enhances the generalization capabilities and physical explainability. The research questions addressed in this part are: Can a neural network-based lateral velocity estimator generalize well when tested on a vehicle not used for training? Can the network’s parameters be physically explainable? The approach is validated using an open dataset with two race cars. In comparison with traditional and neural network estimators of the literature, the KS-NNs improve noise rejection, exhibit better generalization capacity, are more sample-efficient, and their structure is physically explainable.
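The kinematics-structured networks (KS-NNs) themselves are not specified in the abstract; as a hedged sketch of the kinematic principle such a network is said to encode, the kinematic bicycle model relates lateral velocity to forward speed and steering geometry (the wheelbase split `l_f`, `l_r` below is an illustrative assumption, not data from the thesis):

```python
import math

def kinematic_lateral_velocity(v_x, steering_angle, l_f=1.2, l_r=1.3):
    """Kinematic bicycle-model estimate of lateral velocity at the centre of
    gravity: v_y = v_x * tan(beta), where beta is the sideslip angle implied
    by the steering geometry.

    v_x            : longitudinal velocity [m/s]
    steering_angle : front-wheel steering angle [rad]
    l_f, l_r       : distances from the CoG to the front/rear axle [m]
    """
    beta = math.atan(l_r / (l_f + l_r) * math.tan(steering_angle))
    return v_x * math.tan(beta)
```

A KS-NN, as described, would wrap learnable parameters around relations of this form, so that the fitted weights retain a physical reading (e.g. effective axle distances) instead of being an opaque black box; this is what gives the claimed generalization to a vehicle not seen in training.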
