1 |
Artificial intelligence techniques for flood risk management in urban environments. Sayers, William Keith Paul. January 2015.
Flooding is an important concern for the UK, as evidenced by the many extreme flooding events of the last decade. Improved flood risk intervention strategies are therefore highly desirable. The application of hydroinformatics tools, and of optimisation algorithms in particular, which could provide guidance towards improved intervention strategies, is hindered by the necessity of performing flood modelling when evaluating solutions. Flood modelling is a computationally demanding task; reducing its impact upon the optimisation process would therefore be a significant achievement and of considerable benefit to this research area. In this thesis, sophisticated multi-objective optimisation algorithms have been utilised in combination with cutting-edge flood-risk assessment models to identify least-cost and most-benefit flood risk interventions that can be made on a drainage network. Software analysis and optimisation have improved the performance of the flood risk model. Additionally, artificial neural networks used as feature detectors have been employed as part of a novel development of an optimisation algorithm, alleviating the computational time demands caused by using extremely complex models. The results from testing indicate that the developed algorithm with feature detectors outperforms a base multi-objective genetic algorithm, given the limited computational resources available, in terms of both dominated hypervolume and a modified convergence metric at each iteration. This indicates both that a shorter run of the algorithm produces a more optimal result than a similar-length run of the chosen base algorithm, and that a full run to complete convergence takes fewer iterations (and therefore less time) with the new algorithm.
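The dominated-hypervolume comparison used above can be made concrete with a small sketch (illustrative, not the thesis code): for a two-objective minimisation front, the hypervolume is the area between the front and a fixed reference point, computed here as a staircase sum.

```python
def hypervolume_2d(front, ref):
    """Area dominated by `front` (a list of (f1, f2) points, both objectives
    minimised) up to the reference point `ref`."""
    # Keep only points that strictly dominate the reference point, sorted by f1.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                     # point adds a new staircase step
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# A front closer to the origin dominates a larger area:
a = hypervolume_2d([(1, 4), (2, 2), (4, 1)], ref=(5, 5))
b = hypervolume_2d([(2, 4), (3, 3), (4, 2)], ref=(5, 5))
```

A larger hypervolume means the front dominates more of the objective space, which is why it can rank whole populations rather than single solutions.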
|
2 |
Evoluční model s učením (LEM) pro optimalizační úlohy / Learnable Evolution Model for Optimization (LEM). Grunt, Pavel. January 2014.
My thesis deals with the Learnable Evolution Model (LEM), a new evolutionary method of optimization that employs a classification algorithm. The optimization process is guided by the characteristics that distinguish groups of high- and low-performance solutions in the population. In this thesis I introduce new variants of LEM using the AdaBoost and SVM classification algorithms. The qualities of the proposed LEM variants were validated in a series of experiments in static and dynamic environments. The results show that the method performs better with smaller group sizes. Compared to the Estimation of Distribution Algorithm, the LEM variants achieve comparable or better values faster; the variant combining the AdaBoost approach with the SVM approach had the best overall performance.
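One LEM-style iteration on a toy problem can be sketched as follows. This is a minimal illustration, not the thesis implementation: a nearest-centroid rule stands in for the AdaBoost and SVM classifiers studied in the thesis, and the population size, group fraction, and mutation scale are arbitrary choices.

```python
# Split the population into high- and low-performing groups, learn a rule
# separating them, and sample new candidates that the rule accepts as "high".
import random

def sphere(x):
    # Toy objective to minimise: sum of squares.
    return sum(v * v for v in x)

def centroid(group):
    return [sum(p[i] for p in group) / len(group) for i in range(len(group[0]))]

def lem_step(pop, fitness, frac=0.3, sigma=0.3, rng=random):
    ranked = sorted(pop, key=fitness)
    k = max(1, int(frac * len(pop)))
    high, low = ranked[:k], ranked[-k:]          # best and worst groups
    c_hi, c_lo = centroid(high), centroid(low)

    def looks_high(x):                           # nearest-centroid rule
        d_hi = sum((a - b) ** 2 for a, b in zip(x, c_hi))
        d_lo = sum((a - b) ** 2 for a, b in zip(x, c_lo))
        return d_hi < d_lo

    new = []
    while len(new) < len(pop):                   # sample inside the "high" region
        cand = [v + rng.gauss(0, sigma) for v in rng.choice(high)]
        if looks_high(cand):
            new.append(cand)
    return new

rng = random.Random(0)
pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(40)]
best0 = min(map(sphere, pop))
for _ in range(15):
    pop = lem_step(pop, sphere, rng=rng)
best = min(map(sphere, pop))
```

The classifier's decision boundary is what distinguishes LEM from plain mutation-based search: new candidates are drawn only from the region the learned rule associates with high performance.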
|
3 |
Incorporating Sparse Attention Mechanism into Transformer for Object Detection in Images / Inkludering av gles attention i en transformer för objektdetektering i bilder. Duc Dao, Cuong. January 2022.
DEtection TRansformer, DETR, introduces an innovative design for object detection based on softmax attention. However, the softmax operation produces dense attention patterns, i.e., all entries in the attention matrix receive a non-zero weight, regardless of their relevance for detection. In this work, we explore several alternatives to softmax to incorporate sparsity into the architecture of DETR. Specifically, we replace softmax with sparse transformations from the α-entmax family: sparsemax and entmax-1.5, which induce a fixed amount of sparsity, and α-entmax, which treats sparsity as a learnable parameter of each attention head. In addition to evaluating the effect on detection performance, we examine the resulting attention maps from the perspective of explainability. To this end, we introduce three evaluation metrics to quantify the sparsity, complementing the qualitative observations. Although our experimental results on the COCO detection dataset do not show an increase in detection performance, we find that learnable sparsity provides more flexibility to the model and produces more interpretable attention maps. To the best of our knowledge, we are the first to introduce learnable sparsity into the architecture of transformer-based object detectors.
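For reference, sparsemax (the α = 2 member of the entmax family) can be sketched in a few lines of NumPy. This is a generic sketch of the published algorithm, not the thesis code: unlike softmax, it projects the scores onto the probability simplex, so low-scoring entries receive exactly zero weight.

```python
import numpy as np

def sparsemax(z):
    """Sparse alternative to softmax: Euclidean projection of `z` onto the
    probability simplex (Martins & Astudillo, 2016)."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                  # scores in descending order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum          # entries kept in the support
    k_z = k[support][-1]                         # support size
    tau = (cumsum[support][-1] - 1) / k_z        # threshold
    return np.maximum(z - tau, 0.0)

p = sparsemax([2.0, 1.0, -1.0])                  # some entries are exactly zero
```

Softmax applied to the same scores would give every entry a strictly positive weight; sparsemax instead truncates everything below the threshold tau to zero, which is the source of the sparse attention patterns discussed above.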
|
4 |
Link Prediction Using Learnable Topology Augmentation / Länkprediktion med hjälp av en inlärningsbar topologiförstärkning. Leatherman, Tori. January 2023.
Link prediction is a crucial task in many downstream applications of graph machine learning. Graph Neural Networks (GNNs) are a prominent approach for transductive link prediction, where the aim is to predict missing links or connections only within the existing nodes of a given graph. However, many real-life applications require inductive link prediction for newly arriving nodes with no connections to the original graph. Recent approaches have therefore adopted a Multilayer Perceptron (MLP) for inductive link prediction based solely on node features. In this work, we show that incorporating both the connectivity structure and the features of the new nodes provides better model expressiveness. To bring such expressiveness to inductive link prediction, we propose LEAP, an encoder that features LEArnable toPology augmentation of the original graph and enables message passing with the newly arriving nodes. To the best of our knowledge, this is the first attempt to provide structural context for newly arriving nodes via learnable augmentation in inductive settings. Extensive experiments on four real-world homogeneous graphs demonstrate that LEAP significantly surpasses the state-of-the-art methods in terms of AUC and average precision, with improvements of up to 22% and 17%, respectively. The code and datasets are available on GitHub*.
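The two reported metrics can be sketched directly in NumPy (an illustrative implementation, not the LEAP evaluation code): AUC as the probability that a held-out positive edge outscores a random negative edge, and average precision as precision averaged over the ranks of the positive edges.

```python
import numpy as np

def auc(pos_scores, neg_scores):
    # Probability that a random positive edge outscores a random negative
    # edge; ties count one half.
    p = np.asarray(pos_scores, dtype=float)[:, None]
    n = np.asarray(neg_scores, dtype=float)[None, :]
    return float((p > n).mean() + 0.5 * (p == n).mean())

def average_precision(pos_scores, neg_scores):
    # Mean of precision@k taken at the rank k of each positive edge.
    scores = np.concatenate([pos_scores, neg_scores]).astype(float)
    labels = np.concatenate([np.ones(len(pos_scores)), np.zeros(len(neg_scores))])
    order = np.argsort(-scores, kind="stable")   # ranks, best score first
    hits = labels[order].cumsum()
    precision_at_k = hits / np.arange(1, scores.size + 1)
    return float((precision_at_k * labels[order]).sum() / labels.sum())
```

Both metrics are threshold-free, which is why they suit link prediction: the model only needs to rank true edges above sampled non-edges, not pick a fixed score cutoff.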
|