41

Wildfire Spread Prediction Using Attention Mechanisms In U-Net

Shah, Kamen Haresh 01 December 2022 (has links) (PDF)
An investigation into using attention mechanisms for better feature extraction in wildfire spread prediction models. This research examines the U-Net architecture for image segmentation, a process that partitions images by classifying each pixel into one of two classes. The deep learning models explored in this research integrate modern deep learning architectures and the techniques used to optimize them. The models are trained on 12 distinct observational variables derived from the Google Earth Engine catalog. Evaluation is conducted with accuracy, Dice coefficient score, ROC-AUC, and F1-score. This research concludes that when U-Net is augmented with attention mechanisms, the attention component improves the suppression of irrelevant features and the recognition of salient ones, improving overall performance. Furthermore, employing ensemble modeling reduces bias and variance, leading to more consistent and accurate predictions. When performing inference on wildfire propagation at 30-minute intervals, the architecture presented in this research achieved a ROC-AUC score of 86.2% and an accuracy of 82.1%.
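To give a concrete sense of what an attention-gated skip connection can look like, the sketch below is an illustrative TensorFlow/Keras snippet in the spirit of additive attention gates for U-Net. It is not the author's actual model: the `attention_gate` helper, the layer sizes, and the assumption that the skip and gating features share the same spatial resolution are all ours.

```python
import tensorflow as tf
from tensorflow.keras import layers

def attention_gate(skip, gating, inter_channels):
    # Project both inputs to a common channel count, combine additively,
    # and derive a per-pixel attention coefficient in [0, 1].
    theta = layers.Conv2D(inter_channels, 1)(skip)
    phi = layers.Conv2D(inter_channels, 1)(gating)
    act = layers.Activation("relu")(layers.Add()([theta, phi]))
    psi = layers.Conv2D(1, 1, activation="sigmoid")(act)
    # Re-weight the skip features: irrelevant regions are suppressed,
    # salient regions are passed on to the decoder.
    return layers.Multiply()([skip, psi])

# Illustrative shapes only; both feature maps are assumed to be at the same resolution.
skip_in = tf.keras.Input((64, 64, 32))
gate_in = tf.keras.Input((64, 64, 64))
gated = attention_gate(skip_in, gate_in, inter_channels=16)
model = tf.keras.Model([skip_in, gate_in], gated)
```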
42

Modelling and Run-Time Control of Localization System for Resource-Constrained Devices / Modellering och Realtidsreglering av Lokaliseringssystem på Enheter med Begränsade Resurser

Mosskull, Albin January 2022 (has links)
As resource-constrained autonomous vehicles are used for more and more applications, their ability to achieve the lowest possible localization error without expending more power than needed becomes crucial. Despite this, the parameters of the localization systems, both for the platform and the application, are often set arbitrarily. In this thesis, we propose a model-based controller that adapts the parameters of the localization system at run-time by observing conditions in the environment. The test-bed used for experiments consists of maplab, a visual-inertial localization framework, which we execute on the Nvidia Jetson AGX platform. The results show that the linear velocity is the single most important environmental attribute on which to base the decision of when to update the parameters. We also found that while it was not possible to find a direct connection between individual parameters and environmental conditions, a connection could be found between sets of configuration parameters and conditions. Based on these conclusions, we compare model-based controller setups based on three different models: Finite Impulse Response (FIR), AutoRegressive eXogenous input (ARX) and Multi-Layer Perceptron (MLP). The FIR-based controller performed the best: it is able to select configurations at the appropriate times and keeps the error lower than randomly guessing which set of configuration parameters is best. The proposed solution requires offline profiling before it can be implemented on new localization systems, but it can help to reduce the error and power consumption and thus enable more uses of resource-constrained devices. / The use of resource-constrained autonomous vehicles is growing steadily, which in turn increases the importance of their being able to localize with the lowest possible error without consuming excess power. Despite this, the parameters of both the hardware and the algorithms of these localization systems are often chosen arbitrarily. In this thesis we present a solution to this in the form of a model-based controller that adapts the parameters based on what it detects in the surroundings. Our test setup consists of maplab, a localization framework, which we execute on the Nvidia Jetson AGX platform. The results show that the linear velocity is the most important environmental variable to detect and use for adapting the parameters of the localization system. The results also show that connections can be found between configurations and environmental variables, even though they cannot be found between specific configuration parameters and environmental variables. The best-performing controller turns out to be one based on a Finite Impulse Response model with an optimization horizon of 5 seconds. It performs better than both an AutoRegressive eXogenous input based controller and a Multi-Layer Perceptron based controller. The Finite Impulse Response controller achieves an error lower than random guessing on data it has not seen before. The solution presented in this project requires offline optimization to work, but once that is done it can reduce both the localization error and the power consumption and thereby open up new uses for resource-constrained devices.
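As a loose illustration of the FIR-based idea (our own sketch, not the thesis implementation), a run-time controller can hold one set of FIR coefficients per candidate configuration, predict the localization error from the recent linear-velocity history, and switch to the configuration with the lowest predicted error. The `FIRConfigSelector` class, the configuration names, and the coefficient values below are hypothetical; real coefficients would come from the offline profiling step described above.

```python
import numpy as np

class FIRConfigSelector:
    def __init__(self, fir_coeffs_per_config):
        # Maps configuration name -> FIR coefficients identified offline from profiling data.
        self.coeffs = fir_coeffs_per_config

    def predict_error(self, velocity_history):
        # FIR prediction: weighted sum of the most recent velocity samples.
        return {name: float(np.dot(c, velocity_history[-len(c):]))
                for name, c in self.coeffs.items()}

    def select(self, velocity_history):
        # Pick the configuration whose predicted localization error is lowest.
        preds = self.predict_error(velocity_history)
        return min(preds, key=preds.get)

# Hypothetical coefficients and velocity trace, purely for illustration.
selector = FIRConfigSelector({"low_power": np.array([0.5, 0.3, 0.2]),
                              "high_accuracy": np.array([0.2, 0.1, 0.1])})
print(selector.select(np.array([0.4, 0.8, 1.2])))
```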
43

Strojové učení ve strategických hrách / Machine Learning in Strategic Games

Vlček, Michael January 2018 (has links)
Machine learning is spearheading progress in artificial intelligence when it comes to providing human opponents with competition in strategy games, be it chess, Go or poker. The branch of machine learning that shows the most promising results in playing strategy games is reinforcement learning. The next milestone for current research lies in the computer game StarCraft II, which far exceeds the previous games in complexity and represents a potential new breakthrough in this field. This work analyzes the problem and suggests a solution incorporating the reinforcement learning algorithm A2C and the hyperparameter optimization method Population Based Training (PBT), which could mean a step forward for the current state of research.
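To give a concrete flavour of the PBT component mentioned above, the sketch below shows the exploit-and-explore step that Population Based Training applies to a population of A2C workers: weak members copy the weights and hyperparameters of strong members and then perturb them. It is a minimal illustration under our own simplifying assumptions (a single tuned hyperparameter, a quarter of the population exploited), not code from the thesis.

```python
import random

def pbt_step(population):
    # population: list of dicts with keys 'score', 'lr', 'weights'.
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    top, bottom = ranked[:len(ranked) // 4], ranked[-(len(ranked) // 4):]
    for member in bottom:
        parent = random.choice(top)
        member["weights"] = parent["weights"]                    # exploit: copy a stronger member
        member["lr"] = parent["lr"] * random.choice([0.8, 1.2])  # explore: perturb the hyperparameter
    return population

# Illustrative population; in practice 'score' would be the A2C agent's evaluation return.
pop = [{"score": random.random(), "lr": 3e-4, "weights": None} for _ in range(8)]
pbt_step(pop)
```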
44

Využití umělé inteligence v technické diagnostice / Utilization of artificial intelligence in technical diagnostics

Konečný, Antonín January 2021 (has links)
The diploma thesis is focused on the use of artificial intelligence methods for evaluating the fault condition of machinery. The evaluated data come from a vibrodiagnostic model for the simulation of static and dynamic unbalance. Machine learning methods are applied, specifically supervised learning. The thesis describes the Spyder software environment and its alternatives, and the Python programming language in which the scripts are written. It contains an overview and description of the libraries used (Scikit-learn, SciPy, Pandas, ...) and of the methods: K-Nearest Neighbors (KNN), Support Vector Machines (SVM), Decision Trees (DT) and Random Forest classifiers (RF). The classification results are visualized in a confusion matrix for each method. The appendix includes the scripts written for feature engineering, hyperparameter tuning, evaluation of learning success, and classification with visualization of the results.
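The workflow described above maps closely onto standard Scikit-learn calls. The sketch below uses a synthetic dataset as a stand-in for the vibrodiagnostic measurements (the real features and labels are not reproduced here) and trains three of the listed classifiers, printing a confusion matrix for each.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the vibrodiagnostic features and fault labels.
X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier()),
                  ("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    clf.fit(X_train, y_train)
    print(name, confusion_matrix(y_test, clf.predict(X_test)), sep="\n")
```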
45

Model-based hyperparameter optimization

Crouther, Paul 04 1900 (has links)
The primary goal of this work is to propose a methodology for discovering hyperparameters. Well-tuned, handcrafted hyperparameters help systems converge; poorly chosen hyperparameters, however, leave practitioners in limbo, unsure whether the problem lies in the implementation or in the choice of hyperparameters and system configuration. We specifically analyze the choice of learning rate in stochastic gradient descent (SGD), a popular algorithm. As a secondary goal, we attempt to discover fixed points by smoothing the loss landscape, exploiting assumptions about its distribution to improve the update rule in SGD. Smoothing of the loss landscape has been shown to make convergence possible in large-scale systems and in difficult black-box optimization problems. Here, we use stochastic value gradients (SVG) to smooth the loss landscape by learning a surrogate model and then backpropagating through this model to discover fixed points on the real task SGD is trying to solve. Additionally, we construct a gym environment for testing model-free algorithms, such as Proximal Policy Optimization (PPO), as hyperparameter optimizers for SGD. For tasks, we focus on a toy problem and analyze the convergence of SGD on MNIST using model-free and model-based reinforcement learning methods for control. The model is learned from the parameters of the true optimizer and is used specifically for learning rates rather than for prediction. We run experiments in both online and offline settings. In the online setting, we learn a surrogate model alongside the true optimizer, where hyperparameters are tuned in real time for the true optimizer. In the offline setting, we show that there is more potential in the model-based learning methodology than in the model-free configuration, owing to the surrogate model that smooths out the loss landscape and yields more helpful gradients during backpropagation. / The primary goal of this work is to propose a methodology for discovering hyperparameters. Hyperparameters help systems converge when they are well tuned and handcrafted. However, poorly chosen hyperparameters leave practitioners in uncertainty, torn between concerns about the implementation and a poor choice of hyperparameters and system configuration. We specifically analyze the choice of learning rate in stochastic gradient descent (SGD), a popular algorithm. As a secondary goal, we attempt to discover fixed points by smoothing the loss landscape, exploiting assumptions about its distribution to improve the update rule in SGD. Smoothing of the loss landscape has been shown to make convergence possible in large-scale systems and in difficult black-box optimization problems. Here, we use stochastic value gradients (SVG) to smooth the loss landscape by learning a surrogate model and then backpropagating through this model to discover fixed points on the real task that SGD is trying to solve. In addition, we build a gym environment for testing model-free algorithms, such as Proximal Policy Optimization (PPO), as a hyperparameter optimizer for SGD. For the tasks, we focus on a toy problem and analyze the convergence of SGD on MNIST using model-free and model-based reinforcement learning methods for control. The model is learned from the parameters of the true optimizer and is used specifically for learning rates rather than for prediction. We carry out experiments in online and offline settings. In the online setting, we learn a surrogate model alongside the true optimizer, where the hyperparameters are tuned in real time for the true optimizer. In the offline setting, we show that there is more potential in the model-based learning methodology than in the model-free configuration, because of this surrogate model, which smooths the loss landscape and produces more useful gradients during backpropagation.
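The model-based idea can be illustrated, in a heavily simplified form, by fitting a smooth surrogate of the loss as a function of the log learning rate and stepping toward its minimum. This is our own toy sketch, not the SVG/PPO machinery of the thesis: the quadratic surrogate, the `suggest_lr` helper, and the bounds are assumptions made purely for illustration.

```python
import numpy as np

def suggest_lr(observed_lrs, observed_losses, bounds=(1e-5, 1e-1)):
    logs = np.log10(observed_lrs)
    # Fit a quadratic surrogate loss(log_lr) ~ a*x^2 + b*x + c: a smoothed stand-in
    # for the true loss landscape, learned from a few (lr, loss) observations.
    a, b, c = np.polyfit(logs, observed_losses, deg=2)
    if a <= 0:
        # Surrogate is not convex: fall back to the best learning rate seen so far.
        return float(observed_lrs[int(np.argmin(observed_losses))])
    x_star = -b / (2 * a)                      # analytic minimiser of the surrogate
    return float(np.clip(10 ** x_star, *bounds))

# Made-up observations, purely for illustration.
print(suggest_lr(np.array([1e-4, 1e-3, 1e-2]), np.array([0.9, 0.5, 0.8])))
```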
46

Incorporating Scene Depth in Discriminative Correlation Filters for Visual Tracking

Stynsberg, John January 2018 (has links)
Visual tracking is a computer vision problem where the task is to follow a target through a video sequence. Tracking has many important real-world applications in fields such as autonomous vehicles and robot vision. Since visual tracking does not assume any prior knowledge about the target, it faces challenges such as occlusion, appearance change, background clutter and scale change. In this thesis we try to improve the capabilities of tracking frameworks using discriminative correlation filters by incorporating scene depth information. We utilize scene depth information on three main levels. First, we use raw depth information to segment the target from its surroundings, enabling occlusion detection and scale estimation. Second, we investigate different visual features calculated from depth data to decide which features are good at encoding the geometric information available solely in depth data. Third, we investigate handling missing data in the depth maps using a modified version of the normalized convolution framework. Finally, we introduce a novel approach for parameter search using genetic algorithms to find the best hyperparameters for our tracking framework. Experiments show that depth data can be used to estimate scale changes and handle occlusions. In addition, visual features calculated from depth are more representative when combined with color features. It is also shown that utilizing normalized convolution improves the overall performance in some cases. Lastly, the use of genetic algorithms for hyperparameter search leads to accuracy gains as well as some insights into the performance of different components within the framework.
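The genetic-algorithm hyperparameter search can be sketched as repeated selection, crossover and mutation over tracker configurations. The parameter names, ranges, and the placeholder fitness function below are our own illustrative assumptions; in the thesis the fitness would be the tracker's score on a validation set.

```python
import random

SEARCH_SPACE = {"learning_rate": (0.005, 0.05), "sigma": (0.5, 4.0)}  # hypothetical tracker parameters

def random_individual():
    return {k: random.uniform(*rng) for k, rng in SEARCH_SPACE.items()}

def evolve(population, fitness, n_keep=4, mutation_scale=0.1):
    ranked = sorted(population, key=fitness, reverse=True)[:n_keep]     # selection: keep the fittest
    children = []
    while len(children) < len(population) - n_keep:
        a, b = random.sample(ranked, 2)
        child = {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}  # crossover: mix two parents
        k = random.choice(list(SEARCH_SPACE))
        lo, hi = SEARCH_SPACE[k]
        child[k] = min(hi, max(lo, child[k] + random.gauss(0, mutation_scale * (hi - lo))))  # mutation
        children.append(child)
    return ranked + children

# Placeholder fitness; a real run would evaluate the tracker with each configuration.
pop = [random_individual() for _ in range(10)]
pop = evolve(pop, fitness=lambda ind: -abs(ind["learning_rate"] - 0.02))
```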
47

ML implementation for analyzing and estimating product prices / ML implementation för analys och estimation av produktpriser

Kenea, Abel Getachew, Fagerslett, Gabriel January 2024 (has links)
Efficient price management is crucial for companies with many different products to keep track of, which has led to the common practice of price logging. Today these prices are often adjusted manually, but setting prices by hand is labor-intensive and prone to human error. This project aims to use machine learning to assist in the pricing of products by estimating the prices to be set. Multiple machine learning models have been tested, and an artificial neural network has been implemented for estimating prices effectively. Through additional experimentation, the design of the network was fine-tuned to make it compatible with the project's needs. The libraries used for implementing and managing the machine learning models are mainly Scikit-learn and TensorFlow. Finally, the trained model has been saved to a file and integrated with an API for accessibility.
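The described pipeline, training a small regression network on product features, saving it to a file, and serving it behind an API, can be sketched with TensorFlow/Keras as below. The feature count, layer sizes, synthetic data and file name are illustrative assumptions, not the project's actual network.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for product features and logged prices.
X = np.random.rand(1000, 8).astype("float32")
y = (X @ np.random.rand(8, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),          # single output: the estimated price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Persist the trained model; an API layer would later load this file to serve estimates.
model.save("price_model.keras")
```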
