About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Estudo de sistemas magnéticos modeláveis mediante sub-redes

Rodrigues, Aline do Nascimento 25 July 2014
We have modeled several magnetic systems consisting of a number of sublattices in the mean-field approximation. This is possible in crystalline systems formed by two or more magnetic ions coupled by specific interactions such as the crystal field and exchange, among others. The main idea is to solve the microscopic Hamiltonian that models a given magnetic system in order to obtain its magnetic equation of state, M(H, T). To this end, we use the sublattice scheme appropriate to the different magnetic arrangements (ferro-, ferri- and antiferromagnetic). From the solutions of the Hamiltonian (eigenvalues and eigenvectors), the physical quantities of interest were determined. In principle we consider systems with localized magnetism due to 3d and 4f electrons, with the participation of non-magnetic ligands, including 3d-4f systems in the presence of a crystal field. In this dissertation we use two- and three-sublattice models to obtain the equation of state for the following systems: RKKY exchange in RNi2B2C, and superexchange in (Y3-zRz)(T1xFe1-x)(T2yFe3-y)O12, LixFe3-xO4 and (NixMn1-x)1.5[Cr(CN)6]. In these formulas, R represents a rare-earth ion, and T1 and T2 represent non-magnetic ions. Some representative cases are presented to illustrate the different equations of state and the behavior of the sublattices, metamagnetism, compensation temperature, etc. The extension to other similar systems may be direct or may need to incorporate additional phenomenological parameters.
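The sublattice mean-field scheme described above can be sketched numerically. The snippet below is only a minimal illustration, not the author's actual Hamiltonian diagonalization: it solves the self-consistent equations for two spin-1/2 sublattices coupled by a single exchange constant lam (negative for antiferromagnetic coupling), in reduced units where g·muB = kB = 1; all parameter values are hypothetical.

```python
import math

def brillouin(J, x):
    """Brillouin function B_J(x); for J = 1/2 it reduces to tanh(x)."""
    if abs(x) < 1e-12:
        return 0.0
    a = (2 * J + 1) / (2 * J)
    b = 1 / (2 * J)
    coth = lambda y: 1.0 / math.tanh(y)
    return a * coth(a * x) - b * coth(b * x)

def two_sublattice_m(H, T, lam=-1.0, J=0.5, n_iter=500, mix=0.5):
    """Solve m_i = J * B_J(J * (H + lam * m_j) / T) self-consistently.

    Returns the two sublattice magnetizations (m1, m2); M(H, T) = m1 + m2.
    """
    m1, m2 = 0.9 * J, -0.9 * J  # staggered initial guess for the AFM case
    for _ in range(n_iter):
        new1 = J * brillouin(J, J * (H + lam * m2) / T)
        new2 = J * brillouin(J, J * (H + lam * m1) / T)
        # damped (mixed) update for stable convergence
        m1 = (1 - mix) * m1 + mix * new1
        m2 = (1 - mix) * m2 + mix * new2
    return m1, m2

m1, m2 = two_sublattice_m(H=0.0, T=0.05)  # ordered (antiferromagnetic) phase
p1, p2 = two_sublattice_m(H=0.0, T=5.0)   # paramagnetic phase
```

At low temperature and zero field the two sublattices saturate with opposite signs, so the net magnetization vanishes, while well above the ordering temperature both collapse to zero; ferro- or ferrimagnetic cases follow by changing the sign of lam or making the two J values unequal.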
62

Estudo do campo cristalino em oxihaletos dopados com íons Eu3+

Portela, Irlan Marques Cunha 15 March 2013
In this work we applied the point charge electrostatic model (PCEM), the simple overlap model (SOM) and the method of equivalent nearest neighbors (MENN) to a series of oxyhalide crystals, namely GdOBr, LaOI, GdOCl, LaOCl, YOCl and LaOBr, all doped with the Eu3+ ion, with the objective of discussing the magnitude of the charges of the halogen ions in this series. Using the local structure of the luminescent site, the crystal field parameters (Bkq) and the splitting of the 7F1 level were calculated. The point charge electrostatic model, as expected, led to satisfactory predictions only from the qualitative point of view. The simple overlap model and the MENN reproduced the experimental splitting of the 7F1 level satisfactorily. It is shown that the effect of the O2- ions is dominant in the calculation of the crystal field parameters, and the charge factors of the halogens were always smaller than those of the O2- ions, although in some cases the nearest-neighbor charge factors were greater than their valence when the SOM was applied. This is not completely understood up to now.
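The point-charge picture discussed above can be illustrated with the geometric part of the axial rank-2 crystal-field parameter, B20 ∝ Σ q (3cos²θ − 1)/(2r³). The sketch below is only a toy version of the PCEM, with all physical prefactors set to 1 and hypothetical ligand coordinates; it is not the model actually fitted in the thesis.

```python
import math

def b20_lattice_sum(ligands):
    """Geometric lattice sum for the axial crystal-field parameter B20.

    ligands: list of (charge_factor, (x, y, z)) around a rare-earth ion
    at the origin; all physical constants are omitted (set to 1).
    """
    total = 0.0
    for q, (x, y, z) in ligands:
        r = math.sqrt(x * x + y * y + z * z)
        cos_theta = z / r
        total += q * (3 * cos_theta ** 2 - 1) / (2 * r ** 3)
    return total

# example: perfect octahedron of q = -1 ligands (cubic symmetry => B20 = 0)
OCTAHEDRON = [(-1.0, (1, 0, 0)), (-1.0, (-1, 0, 0)),
              (-1.0, (0, 1, 0)), (-1.0, (0, -1, 0)),
              (-1.0, (0, 0, 1)), (-1.0, (0, 0, -1))]
# compressing the axial pair breaks cubic symmetry and B20 becomes nonzero
SQUASHED = OCTAHEDRON[:4] + [(-1.0, (0, 0, 0.9)), (-1.0, (0, 0, -0.9))]
```

The short axial bonds dominating the sum mirrors the dominance of the nearest O2- ions noted in the abstract; the halogen contribution falls off as 1/r³.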
63

Machine learning strategies for multi-step-ahead time series forecasting

Ben Taieb, Souhaib 08 October 2014
How much electricity is going to be consumed over the next 24 hours? What will the temperature be for the next three days? What will the sales of a certain product be for the next few months? Answering these questions often requires forecasting several future observations from a given sequence of historical observations, called a time series.

Historically, time series forecasting has mainly been studied in econometrics and statistics. In the last two decades, machine learning, a field concerned with the development of algorithms that can automatically learn from data, has become one of the most active areas of predictive modeling research. This success is largely due to the superior performance of machine learning prediction algorithms in applications as diverse as natural language processing, speech recognition and spam detection. However, there has been very little research at the intersection of time series forecasting and machine learning.

The goal of this dissertation is to narrow this gap by addressing the problem of multi-step-ahead time series forecasting from the perspective of machine learning. To that end, we propose a series of forecasting strategies based on machine learning algorithms.

Multi-step-ahead forecasts can be produced recursively, by iterating a one-step-ahead model, or directly, using a specific model for each horizon. As a first contribution, we conduct an in-depth study comparing recursive and direct forecasts generated with different learning algorithms for different data generating processes. More precisely, we decompose the multi-step mean squared forecast errors into bias and variance components and analyze their behavior over the forecast horizon for different time series lengths. The results and observations made in this study then guide the development of new forecasting strategies.

In particular, we find that choosing between recursive and direct forecasts is not an easy task, since it involves a trade-off between bias and estimation variance that depends on many interacting factors, including the learning model, the underlying data generating process, the time series length and the forecast horizon. As a second contribution, we develop multi-stage forecasting strategies that do not treat the recursive and direct strategies as competitors, but seek to combine their best properties. More precisely, the multi-stage strategies generate recursive linear forecasts, and then adjust these forecasts by modeling the multi-step forecast residuals with direct nonlinear models at each horizon, called rectification models. We propose a first multi-stage strategy, called the rectify strategy, which estimates the rectification models using the nearest neighbors model. However, because recursive linear forecasts often need only small adjustments with real-world time series, we also consider a second multi-stage strategy, called the boost strategy, which estimates the rectification models using gradient boosting algorithms based on so-called weak learners.

Generating multi-step forecasts using a different model at each horizon provides great modeling flexibility. However, selecting these models independently can lead to irregularities in the forecasts that increase the forecast variance. The problem is exacerbated with nonlinear machine learning models estimated from short time series. To address this issue, and as a third contribution, we introduce and analyze multi-horizon forecasting strategies that exploit the information contained in other horizons when learning the model for each horizon. In particular, to select the lag order and the hyperparameters of each model, multi-horizon strategies minimize forecast errors over multiple horizons rather than just the horizon of interest.

We compare all the proposed strategies with both the recursive and direct strategies. We first apply a bias and variance study, then evaluate the different strategies using real-world time series from two past forecasting competitions. For the rectify strategy, in addition to avoiding the choice between recursive and direct forecasts, the results demonstrate that it performs better than, or at least close to, the better of the recursive and direct forecasts in different settings. For the multi-horizon strategies, the results emphasize the decrease in variance compared with single-horizon strategies, especially with linear or weakly nonlinear data generating processes. Overall, we find that the accuracy of multi-step-ahead forecasts based on machine learning algorithms can be significantly improved if an appropriate forecasting strategy is used to select the model parameters and to generate the forecasts.

Lastly, as a fourth contribution, we participated in the Load Forecasting track of the Global Energy Forecasting Competition 2012. The competition involved a hierarchical load forecasting problem in which we were required to backcast and forecast hourly loads for a US utility with twenty geographical zones. Our team, TinTin, ranked fifth out of 105 participating teams, and we were awarded an IEEE Power & Energy Society award. / Doctorat en sciences, Spécialisation Informatique
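The recursive/direct distinction can be made concrete with a tiny linear autoregression. The sketch below is a simplified illustration rather than the dissertation's actual experimental setup: it fits least-squares AR models and produces H-step forecasts both ways; the lag order p and the demo series are arbitrary examples.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept term."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return w

def predict(w, x):
    return float(np.dot(w[:-1], x) + w[-1])

def lagged(series, p, h):
    """Design matrix of p lags and targets h steps ahead."""
    X = np.array([series[i:i + p] for i in range(len(series) - p - h + 1)],
                 dtype=float)
    y = np.array(series[p + h - 1:], dtype=float)
    return X, y

def recursive_forecast(series, p, H):
    """One model for horizon 1, iterated: forecasts feed back as inputs."""
    X, y = lagged(series, p, 1)
    w = fit_linear(X, y)
    hist = list(series)
    for _ in range(H):
        hist.append(predict(w, np.array(hist[-p:], dtype=float)))
    return hist[-H:]

def direct_forecast(series, p, H):
    """A separate model per horizon h, each predicting y_{t+h} from the lags."""
    out = []
    for h in range(1, H + 1):
        X, y = lagged(series, p, h)
        w = fit_linear(X, y)
        out.append(predict(w, np.array(series[-p:], dtype=float)))
    return out

series = list(range(30))  # noiseless linear trend: both strategies agree
rec = recursive_forecast(series, p=2, H=3)
dct = direct_forecast(series, p=2, H=3)
```

On a noiseless series the two strategies coincide; with noise and nonlinear learners they trade bias against variance, which is exactly the trade-off studied above.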
64

Entropic measures of connectivity with an application to intracerebral epileptic signals / Mesures entropiques de connectivité avec application à l'épilepsie

Zhu, Jie 22 June 2016
The work presented in this thesis deals with brain connectivity, including structural, functional and effective connectivity. These three types of connectivity are obviously linked, and their joint analysis can give us a better understanding of how brain structures and functions constrain each other. Our research focuses in particular on effective connectivity, which defines connectivity graphs carrying information on causal links that may be direct or indirect, unidirectional or bidirectional, between nodes corresponding to brain regions at the macroscopic scale. The main purpose of our work is to identify interactions between different brain areas from intracerebral recordings during the generation and propagation of seizures, a major issue in the pre-surgical phase of epilepsy surgery treatment. Exploring effective connectivity generally follows two kinds of approaches: model-based techniques and data-driven ones. In this work, we address the question of improving the estimation of information-theoretic quantities, mainly mutual information and transfer entropy, based on k-nearest neighbors techniques. The proposed estimators, which reduce the bias with respect to estimators from the literature, are first evaluated and compared with existing ones on simulated signals, including white noise processes, linear and nonlinear vector autoregressive processes, as well as realistic physiology-based models. Some of them are then applied to intracerebral electroencephalographic signals recorded from an epileptic patient and compared with the well-known directed transfer function. The experimental results show that the proposed techniques improve the estimation of information-theoretic quantities on simulated signals, for comparable variances, while the analysis is more difficult in real situations. Globally, the different estimators appear coherent with one another, although the preliminary results prove to be more in agreement with the conclusions of the clinical experts when the directed transfer function is applied.
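A flavor of the nearest-neighbor estimators discussed above: the snippet below implements the classic Kozachenko-Leonenko differential-entropy estimator for one-dimensional samples with k = 1 (using psi(1) = -gamma and psi(N) ~ ln N). It is only a baseline illustration; the bias-reduced transfer-entropy estimators developed in the thesis are more elaborate.

```python
import math
import random

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant; psi(1) = -gamma

def kl_entropy_1d(samples):
    """Kozachenko-Leonenko (k = 1) differential entropy estimate, in nats.

    H ~ (1/N) * sum(ln r_i) + ln(2 * (N - 1)) + gamma, where r_i is the
    distance from sample i to its nearest neighbor.
    """
    xs = sorted(samples)
    n = len(xs)
    total = 0.0
    for i in range(n):
        left = xs[i] - xs[i - 1] if i > 0 else float("inf")
        right = xs[i + 1] - xs[i] if i < n - 1 else float("inf")
        r = max(min(left, right), 1e-300)  # guard against duplicate samples
        total += math.log(r)
    return total / n + math.log(2 * (n - 1)) + EULER_GAMMA

rng = random.Random(42)
u = [rng.random() for _ in range(4000)]
h_unit = kl_entropy_1d(u)                       # true value: 0 nats
h_scaled = kl_entropy_1d([2.0 * x for x in u])  # true value: ln 2
```

Scaling the data by a factor c shifts the estimate by exactly ln c, as differential entropy requires; transfer entropy is built from differences of such (conditional) entropy estimates.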
65

An Efficient Classification Model for Analyzing Skewed Data to Detect Frauds in the Financial Sector / Un modèle de classification efficace pour l'analyse des données déséquilibrées pour détecter les fraudes dans le secteur financier

Makki, Sara 16 December 2019
There are different types of risks in the financial domain, such as terrorist financing, money laundering, credit card fraud and insurance fraud, that may result in catastrophic consequences for entities such as banks or insurance companies. These financial risks are usually detected using classification algorithms. In classification problems, the skewed distribution of classes, also known as class imbalance, is a very common challenge in financial fraud detection, where special data mining approaches are used along with the traditional classification algorithms to tackle this issue. The class imbalance problem occurs when one of the classes has many more instances than another, and it is even more challenging in a big data context. The datasets used to build and train the models contain an extremely small portion of the minority group, known as positives, in comparison to the majority class, known as negatives. In most cases it is more delicate and crucial to correctly classify the minority group than the other group, as in fraud detection or disease diagnosis. In these examples, the fraud and the disease are the minority groups, and it is more delicate to miss a fraud record, because of its dangerous consequences, than a normal one. These class proportions make it very difficult for a machine learning classifier to learn the characteristics and patterns of the minority group: classifiers are biased towards the majority group because of its many examples in the dataset and learn to classify it much faster than the other group. After conducting a thorough study of the challenges faced in class imbalance cases, we found that we still cannot reach an acceptable sensitivity (i.e., good classification of the minority group) without a significant decrease in accuracy. This leads to another challenge, which is the choice of the performance measures used to evaluate the models. In these cases the choice is not straightforward; accuracy or sensitivity alone is misleading, so we use other measures, like the precision-recall curve or the F1 score, to evaluate the trade-off between accuracy and sensitivity. Our objective is to build an imbalanced classification model that accounts for extreme class imbalance and false alarms in a big data framework. We developed two approaches: a Cost-Sensitive Cosine Similarity K-Nearest Neighbor (CoSKNN) as a single classifier, and a K-modes Imbalance Classification Hybrid Approach (K-MICHA) as an ensemble learning methodology. In CoSKNN, our aim was to tackle the imbalance problem by using cosine similarity as a distance metric and by introducing a cost-sensitive score for classification using the KNN algorithm. We conducted a comparative validation experiment in which we proved the effectiveness of CoSKNN in terms of accuracy and fraud detection. The aim of K-MICHA, on the other hand, is to cluster data points that are similar in terms of the classifiers' outputs, and then to compute fraud probabilities in the resulting clusters in order to detect fraud in new transactions. This approach can be used to detect any type of financial fraud when labelled data are available. We applied K-MICHA to credit card, mobile payment and auto insurance fraud data sets. In all three case studies, we compare K-MICHA with stacking using voting, weighted voting, logistic regression and CART, as well as with AdaBoost and random forest, and we prove the efficiency of K-MICHA based on these experiments. We also implemented K-MICHA in a big data framework using H2O and R, which allowed us to process and analyze larger datasets in very little time.
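A toy rendition of the CoSKNN idea described above: rank training points by cosine similarity and let minority (fraud) neighbors vote with an inflated, cost-sensitive weight. The scoring rule, cost value and dataset here are illustrative guesses, not the exact formulation of the thesis.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return num / (na * nb) if na > 0 and nb > 0 else 0.0

def cosknn_predict(X_train, y_train, x, k=3, fraud_cost=5.0):
    """Cost-sensitive cosine-similarity KNN; labels are 1 (fraud) or 0 (normal).

    Fraud neighbors contribute fraud_cost * similarity to the fraud score,
    so the rare class is not drowned out by the majority class.
    """
    ranked = sorted(((cosine_sim(t, x), y) for t, y in zip(X_train, y_train)),
                    reverse=True)[:k]
    fraud_score = sum(s * fraud_cost for s, y in ranked if y == 1)
    normal_score = sum(s for s, y in ranked if y == 0)
    return 1 if fraud_score > normal_score else 0

# tiny hypothetical dataset: frauds cluster along one direction in feature space
X = [(1.0, 0.1), (0.9, 0.0), (0.1, 1.0), (0.0, 0.9), (0.2, 1.1)]
y = [1, 1, 0, 0, 0]
```

Raising fraud_cost trades false alarms for sensitivity, which is precisely the accuracy/sensitivity trade-off the abstract evaluates with precision-recall and F1 measures.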
66

Using supervised learning methods to predict the stop duration of heavy vehicles.

Oldenkamp, Emiel January 2020
In this thesis project, we attempt to predict the stop duration of heavy vehicles using data based on GPS positions collected in a previous project. All of the training and prediction is done in AWS SageMaker, and we explore possibilities with Linear Learner, K-Nearest Neighbors and XGBoost, all of which are explained in this paper. Although we were not able to construct a production-grade model within the time frame of the thesis, we show that the potential for such a model does exist given more time, and we propose some paths one can take to improve on the endpoint of this project.
67

Detekce přítomnosti osob pomocí IoT senzorů / Room Occupancy Detection with IoT Sensors

Kolarčík, Tomáš January 2021
The aim of this work was to create a module for the home automation platform Home Assistant. The module is able to determine which room is occupied and to estimate a more accurate position of people inside the room. GPS location cannot be used for this purpose because it is inaccurate inside buildings, so one of the indoor localization techniques has to be used instead. A solution based on the Bluetooth Low Energy wireless technology was chosen. The localization technique is the fingerprinting method, which estimates position from the signal strengths measured at known points in space; new measurements are compared against a database of these points using machine learning. The system can be supplemented with motion sensors that ensure a quick response when someone enters a room. It can be deployed in a house, an apartment, or a small to medium-sized company to determine the position of people in the building, and it can serve as a very powerful element of home automation.
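The fingerprinting step described above can be sketched in a few lines: each calibration point stores a vector of RSSI values (one per BLE beacon), and a new measurement is assigned to the room whose stored fingerprints are nearest. The room names, beacon count and RSSI values below are made up for illustration; the thesis integrates this into Home Assistant with trained ML models.

```python
def knn_room(fingerprints, rssi, k=3):
    """Majority vote among the k calibration points closest in RSSI space.

    fingerprints: list of (room, rssi_vector); rssi: measured vector (dBm).
    """
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(vec, rssi)), room)
        for room, vec in fingerprints
    )
    votes = {}
    for _, room in ranked[:k]:
        votes[room] = votes.get(room, 0) + 1
    return max(votes, key=votes.get)

# hypothetical calibration database: 3 beacons, 2 rooms
DB = [
    ("kitchen", (-40, -70, -80)), ("kitchen", (-42, -72, -78)),
    ("bedroom", (-80, -45, -60)), ("bedroom", (-78, -47, -62)),
]
```

In practice the calibration database holds many fingerprints per room, and smoothing the noisy RSSI stream before classification markedly improves stability.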
68

Kombination von terrestrischen Aufnahmen und Fernerkundungsdaten mit Hilfe der kNN-Methode zur Klassifizierung und Kartierung von Wäldern

Stümer, Wolfgang 24 August 2004
Mapping forest variables and associated characteristics is fundamental for forest planning and management, and in recent years the demand from policy and industry for such area-wide information has grown. This work describes the k-nearest neighbors (kNN) method, which combines terrestrial field measurements with remote sensing data, for improving estimates and producing maps of the attributes basal area (metric data) and dead wood (categorical data). Several variations within the kNN method were tested, including the distance metric, the weighting function and the number of neighbors. As remote sensing sources, Landsat TM satellite images and hyperspectral (HyMapTM) data were used, which differ both in their spectral and in their spatial resolution. Two Landsat scenes of the same area, acquired in September 1999 and 2000, also allowed a multitemporal approach. The field data for the kNN method comprise tree measurements collected at the test site Tharandter Wald (Germany) in three campaigns with three different sampling designs; an important criterion was an even distribution of attribute values (e.g. basal area values) over the attribute space. For the kNN calculations, a program integrating all kNN functions in a user-friendly interface was developed in Visual Basic, and the pixel-wise output of the results yielded detailed maps. The relative root mean square error (RMSE) and the bootstrap method were used to verify the results and to find optimal parameters. The estimation accuracy for the attribute basal area is between 35 % and 67 % (Landsat) and between 65 % and 67 % (HyMapTM). For the attribute dead wood, the agreement between the kNN estimates and the reference values is between 60.0 % and 73.3 % (Landsat) and between 60.0 % and 63.3 % (HyMapTM). With these accuracies, the kNN method lends itself to the classification of stands and to integration into classification procedures.
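The core of the kNN method combining field plots with imagery reduces to a weighted average: for each pixel, find the k field plots with the most similar spectral signature and average their measured attribute with inverse-distance weights. The plot values and spectral vectors below are invented; the distance metric, weighting exponent and k are exactly the tuning knobs varied in the thesis.

```python
def knn_attribute(plots, pixel, k=2, power=2):
    """Inverse-distance-weighted kNN estimate of a forest attribute.

    plots: list of (spectral_vector, attribute_value) from field plots;
    pixel: spectral vector of the pixel to estimate.
    """
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(vec, pixel)) ** 0.5, val)
        for vec, val in plots
    )[:k]
    num = den = 0.0
    for d, val in ranked:
        w = 1.0 / (d ** power + 1e-9)  # small epsilon guards exact matches
        num += w * val
        den += w
    return num / den

# hypothetical field plots: (spectral vector, basal area in m^2/ha)
PLOTS = [((0.20, 0.40), 10.0), ((0.25, 0.45), 12.0), ((0.80, 0.90), 40.0)]
```

Applied pixel by pixel over a scene, this yields the attribute maps described above; the relative RMSE against held-out plots then measures the estimation accuracy.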
69

Data Driven Energy Efficiency of Ships

Taspinar, Tarik January 2022
Decreasing the fuel consumption, and thus the greenhouse gas emissions, of vessels has emerged as a critical topic for both ship operators and policy makers in recent years. The speed of a vessel has long been recognized as having the highest impact on its fuel consumption, and proposals such as "speed optimization" and "speed reduction" are ongoing discussion topics at the International Maritime Organization. The aim of this study is to develop a speed optimization model using time-constrained genetic algorithms (GA). Subsequent to this, the paper also presents the application of machine learning (ML) regression methods to set up a model for predicting the fuel consumption of vessels. The local outlier factor algorithm is used to eliminate outliers in the prediction features. In the boosting and tree-based regression methods, overfitting is observed after hyperparameter tuning, and early stopping is applied to the overfitted models. In this study, speed is also found to be the most important feature in the fuel consumption prediction models. On the other hand, the GA evaluation results show that random modifications of the default speed profile can improve GA performance, and thus fuel savings, more than constant speed limits during voyages. The results of the GA also indicate that using high crossover rates and low mutation rates can increase fuel savings. Further research is recommended to include fuel and bunker prices to determine fuel efficiency more accurately.
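A minimal sketch of the time-constrained GA idea (not the study's actual model): speeds per voyage leg are the genes, fuel per leg is taken as proportional to v²·distance (from a cubic fuel-rate law), and a penalty enforces the arrival-time constraint. The cost coefficient, speed bounds and GA settings are invented for illustration.

```python
import random

def fuel(speeds, dists):
    """Toy fuel model: rate ~ v^3 and time = d / v, so fuel per leg ~ v^2 * d."""
    return sum(0.01 * v ** 2 * d for v, d in zip(speeds, dists))

def cost(speeds, dists, max_time):
    """Fuel plus a stiff penalty for exceeding the allowed voyage time."""
    t = sum(d / v for d, v in zip(dists, speeds))
    return fuel(speeds, dists) + 1e3 * max(0.0, t - max_time)

def ga_speeds(dists, max_time, pop_size=40, gens=200, cx=0.8, mut=0.1, seed=1):
    rng = random.Random(seed)
    lo, hi = 8.0, 20.0  # knots, hypothetical operational bounds
    pop = [[rng.uniform(lo, hi) for _ in dists] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: cost(s, dists, max_time))
        nxt = [pop[0][:], pop[1][:]]  # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)  # parents drawn from the best ten
            child = ([x if rng.random() < 0.5 else y for x, y in zip(a, b)]
                     if rng.random() < cx else a[:])  # uniform crossover
            child = [min(hi, max(lo, v + rng.gauss(0, 1.0)))
                     if rng.random() < mut else v for v in child]  # mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda s: cost(s, dists, max_time))

# two hypothetical 100-mile legs with a 20-hour limit: optimum is ~10 knots each
best = ga_speeds([100.0, 100.0], max_time=20.0)
```

For this symmetric case the true optimum is a constant 10 knots (fuel 200 in these units); the GA should land close while respecting the deadline, and raising cx while lowering mut typically speeds convergence, echoing the crossover/mutation observation above.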
70

Neural Networks for Modeling of Electrical Parameters and Losses in Electric Vehicle

Fujimoto, Yo January 2023
Permanent magnet synchronous machines have various advantages and have shown superior performance in electric vehicles. However, modeling them is difficult because of their nonlinearity. To deal with this complexity, an artificial neural network and several machine learning models, including k-nearest neighbors, decision tree, random forest, and multiple linear regression with a quadratic model, are developed as new approaches for predicting the electrical parameters and losses of Volvo Cars' electric vehicles, and their performance is evaluated. Test operation data from the Volvo Car Corporation was used to extract and calculate the input and output data for each prediction model. To balance the influence of each input variable, the input data was normalized. In addition, correlation matrices of the normalized inputs were produced, which showed a high correlation between rotor temperature and winding resistance in the electrical parameter prediction dataset, and a strong correlation between winding temperature and rotor temperature in the loss prediction dataset. Grid search with 5-fold cross-validation was implemented to optimize the hyperparameters of the artificial neural network and machine learning models. The artificial neural network models performed best in MSE and R-squared score for all the electrical parameter and loss predictions. The results indicate that artificial neural networks handle complicated nonlinear relationships, like those seen in electrical systems, better than the other machine learning algorithms. Among those other algorithms, random forest produced the best results. With the exception of the q-axis voltage, the decision tree model outperformed the k-nearest neighbors model in parameter prediction, as measured by MSE and R-squared score. Multiple linear regression with a quadratic model produced the worst results for the electrical parameter prediction, because the relationship between input and output was too complex for a multiple quadratic equation. Random forest models performed better than decision tree models because a random forest ensembles hundreds of decision trees fitted on subsets of the data and averages their results. The k-nearest neighbors model performed worse than the decision tree for almost all electrical parameters because it simply chooses the closest points and uses their average as the predicted output, which makes complex nonlinear relationships hard to forecast; it remains helpful for handling simple relationships and for understanding relationships in the data. For the loss prediction, k-nearest neighbors and decision tree produced similar MSE and R-squared scores for the electric machine loss and the inverter loss; their results were worse than those of the multiple linear regression with a quadratic model for these losses, but better for forecasting the difference between electromagnetic power and mechanical power.
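The grid search with 5-fold cross-validation mentioned above can be sketched without any ML library. The example below tunes only the number of neighbors k of a toy kNN regressor on a hypothetical one-dimensional dataset; the actual study searched neural-network and tree hyperparameters the same way.

```python
def knn_regress(train_X, train_y, x, k):
    """Plain kNN regression: average the targets of the k nearest points."""
    ranked = sorted((sum((a - b) ** 2 for a, b in zip(t, x)), y)
                    for t, y in zip(train_X, train_y))[:k]
    return sum(y for _, y in ranked) / len(ranked)

def grid_search_k(X, y, ks, folds=5):
    """Pick the k minimizing the k-fold cross-validated mean squared error."""
    n = len(X)
    best_k, best_mse = None, float("inf")
    for k in ks:
        sse = 0.0
        for f in range(folds):
            val = set(range(f, n, folds))  # every folds-th point held out
            tr_X = [X[i] for i in range(n) if i not in val]
            tr_y = [y[i] for i in range(n) if i not in val]
            for i in val:
                sse += (knn_regress(tr_X, tr_y, X[i], k) - y[i]) ** 2
        mse = sse / n
        if mse < best_mse:
            best_k, best_mse = k, mse
    return best_k, best_mse

# hypothetical noiseless dataset: y = x^2 on a fine grid
X = [(i / 20.0,) for i in range(40)]
y = [px[0] ** 2 for px in X]
best_k, best_mse = grid_search_k(X, y, ks=[1, 2, 5, 20])
```

On this smooth noiseless data a small k wins (k = 2 averages the two flanking points and cancels most of the bias), while a large k over-smooths; with noisy data the balance shifts, which is why the cross-validated search is needed at all.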
