31

Constrained Gaussian Process Regression Applied to the Swaption Cube / Regression för gaussiska processer med bivillkor tillämpad på Swaption-kuben

Deleplace, Adrien January 2021 (has links)
This document is a Master's thesis report in financial mathematics at KTH, produced during an internship at Nexialog Consulting in Paris. It describes the use of constrained Gaussian process regression to build an arbitrage-free swaption cube. The methodology introduced in the document is applied to a data set of out-of-the-money European swaptions.
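
The abstract above contains no implementation; as a rough, hypothetical sketch of the constrained-GP idea only (rejecting posterior samples that violate a simple monotonicity-in-strike condition as a crude no-arbitrage proxy — not the author's method or data), a scikit-learn version could look like the following. All strikes, prices, and kernel settings are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical data: out-of-the-money payer swaption prices should decrease with strike.
rng = np.random.default_rng(0)
strikes = np.linspace(0.01, 0.05, 8).reshape(-1, 1)
prices = np.exp(-60.0 * strikes).ravel() + 0.001 * rng.normal(size=8)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.01), alpha=1e-4, normalize_y=True)
gpr.fit(strikes, prices)

grid = np.linspace(0.01, 0.05, 50).reshape(-1, 1)
samples = gpr.sample_y(grid, n_samples=200, random_state=1)   # shape (50, 200)

# Keep only posterior draws that are non-increasing in strike (crude no-arbitrage proxy).
keep = (np.diff(samples, axis=0) <= 1e-6).all(axis=0)
constrained_mean = samples[:, keep].mean(axis=1) if keep.any() else gpr.predict(grid)
```
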
32

Ranging Error Correction in a Narrowband, Sub-GHz, RF Localization System / Felkorrigering av avståndsmätingar i ett narrowband, sub-GHz, RF-baserat positioneringssystem

Barrett, Silvia January 2023 (has links)
Being able to keep track of one's assets is very useful, from avoiding losing one's keys or phone to finding the needed equipment in a busy hospital or on a construction site. The field of localization is actively evolving to find the best ways to accurately track objects and devices in an energy-efficient manner, at any range, and in any type of environment. This thesis focuses on the last aspect: maintaining accurate localization regardless of environment. For radio-frequency-based systems, challenging environments containing many obstacles, e.g., indoor or urban areas, have a detrimental effect on the measurements used for positioning, making them deceptive. In this work, a method for correcting range measurements is proposed for a narrowband sub-GHz radio-frequency localization system that uses Received Signal Strength Indicator (RSSI) and Time-of-Flight (ToF) measurements for positioning. Three different machine learning models were implemented: a linear regressor, a least squares support vector machine regressor, and a Gaussian process regressor. They were compared in their ability to predict the true range between devices based on raw range measurements. The corrected estimates achieved a 69.96% increase in accuracy compared to uncorrected ToF estimates and an 88.74% increase compared to RSSI estimates. When the corrected range estimates were used for positioning with a trilateration algorithm based on least squares estimation, a 67.84% increase in accuracy was attained compared to positioning with uncorrected range estimates. This shows that the proposed approach is an effective way of improving range estimates to facilitate more accurate positioning.
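
As a hedged sketch of the range-correction pattern described above (not the thesis code), a Gaussian process regressor can be trained to map raw ToF and RSSI measurements to the true range; the feature layout, noise model, and numbers below are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical training data: columns = [raw ToF range (m), RSSI (dBm)], target = true range (m).
true_range = rng.uniform(1.0, 50.0, size=500)
raw_tof = true_range + rng.normal(3.0, 2.0, size=500)              # biased, noisy ToF estimate
rssi = -40.0 - 20.0 * np.log10(true_range) + rng.normal(0, 4.0, size=500)
X = np.column_stack([raw_tof, rssi])

X_tr, X_te, y_tr, y_te = train_test_split(X, true_range, random_state=0)

gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=[10.0, 10.0]) + WhiteKernel(),
    normalize_y=True,
)
gpr.fit(X_tr, y_tr)
corrected = gpr.predict(X_te)

print("mean abs error, raw ToF  :", np.mean(np.abs(X_te[:, 0] - y_te)))
print("mean abs error, corrected:", np.mean(np.abs(corrected - y_te)))
```
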
33

Using AI to improve the effectiveness of turbine performance data

Shreyas Sudarshan Supe (17552379) 06 December 2023 (has links)
<p dir="ltr">For turbocharged engine simulation analysis, manufacturer-provided data are typically used to predict the mass flow and efficiency of the turbine. To create a turbine map, physical tests are performed in labs at various turbine speeds and expansion ratios. These tests can be very expensive and time-consuming. Current testing methods can have limitations that result in errors in the turbine map. As such, only a modest set of data can be generated, all of which have to be interpolated and extrapolated to create a smooth surface that can then be used for simulation analysis.</p><p><br></p><p dir="ltr">The current method used by the manufacturer is a physics-informed polynomial regression model that depends on the Blade Speed Ratio (BSR ) in the polynomial function to model the efficiency and MFP. This method is memory-consuming and provides a lower-than-desired accuracy. This model is decades old and must be updated with new state-of-the-art Machine Learning models to be more competitive. Currently, CTT is facing up to +/-2% error in most turbine maps for efficiency and MFP and the aim is to decrease the error to 0.5% while interpolating the data points in the available region. The current model also extrapolates data to regions where experimental data cannot be measured. Physical tests cannot validate this extrapolation and can only be evaluated using CFD analysis.</p><p><br></p><p dir="ltr">The thesis focuses on investigating different AI techniques to increase the accuracy of the model for interpolation and evaluating the models for extrapolation. The data was made available by CTT. The available data consisted of various turbine parameters including ER, turbine speeds, efficiency, and MFP which were considered significant in turbine modeling. The AI models developed contained the above 4 parameters where ER and turbine speeds are predictors and, efficiency and MFP are the response. Multiple supervised ML models such as SVM, GPR, LMANN, BRANN, and GBPNN were developed and evaluated. From the above 5 ML models, BRANN performed the best achieving an error of 0.5% across multiple turbines for efficiency and MFP. The same model was used to demonstrate extrapolation, where the model gave unreliable predictions. Additional data points were inputted in the training data set at the far end of the testing regions which greatly increased the overall look of the map.</p><p><br></p><p dir="ltr">An additional contribution presented here is to completely predict an expansion ratio line and evaluate with CTT test data points where the model performed with an accuracy of over 95%. Since physical testing in a lab is expensive and time-consuming, another goal of the project was to reduce the number of data points provided for ANN model training. Furthermore, strategically reducing the data points is of utmost importance as some data points play a major role in the training of ANN and can greatly affect the model's overall accuracy. Up to 50% of the data points were removed for training inputs and it was found that BRANN was able to predict a satisfactory turbine map while reducing 20% of the overall data points at various regions.</p>
34

Extending standard outdoor noise propagation models to complex geometries / Extension des modèles standards de propagation du bruit extérieur pour des géométries complexes

Kamrath, Matthew 28 September 2017 (has links)
Noise engineering methods (e.g., ISO 9613-2 or CNOSSOS-EU) efficiently approximate sound levels from roads, railways, and industrial sources in cities. However, engineering methods are limited to simple box-shaped geometries. This dissertation develops and validates a hybrid method that extends the engineering methods to more complicated geometries by introducing an extra attenuation term representing the influence of a real object compared to a simplified object. Calculating the extra attenuation term requires reference calculations to quantify the difference between the complex and simplified objects. Since performing a reference computation for each path is too computationally expensive, the extra attenuation term is linearly interpolated from a data table containing the corrections for many source and receiver positions and frequencies. The 2.5D boundary element method produces the levels for the real complex geometry and a simplified geometry, and subtracting these levels yields the corrections in the table. This dissertation validates the hybrid method for a T-shaped barrier with hard ground, with soft ground, and with buildings. All three cases demonstrate that the hybrid method is more accurate than standard engineering methods for complex cases.
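
As a hedged illustration of the table-lookup step only (the 2.5D BEM reference levels themselves are out of scope here), the extra attenuation could be tabulated over source position, receiver position, and frequency and then linearly interpolated with SciPy; the grid axes and values below are invented.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical correction table: extra attenuation (dB) = L_simple - L_complex,
# tabulated over source x-position, receiver x-position, and octave-band frequency.
src_x = np.linspace(-50.0, 0.0, 11)
rcv_x = np.linspace(0.0, 200.0, 21)
freq = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0])
table = np.random.default_rng(3).uniform(-2.0, 6.0, size=(11, 21, 5))  # placeholder values

extra_attenuation = RegularGridInterpolator((src_x, rcv_x, freq), table)

# One propagation path: source at x = -12 m, receiver at x = 87 m, 500 Hz band.
delta = extra_attenuation([[-12.0, 87.0, 500.0]])[0]
level_engineering = 62.3            # hypothetical engineering-method level for the simplified object, dB
level_hybrid = level_engineering - delta
```
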
35

Remaining useful life estimation of critical components based on Bayesian Approaches. / Prédiction de l'état de santé des composants critiques à l'aide de l'approche Bayesienne

Mosallam, Ahmed 18 December 2014 (has links)
Constructing prognostics models relies upon understanding the degradation process of the monitored critical components in order to correctly estimate the remaining useful life (RUL). Traditionally, a degradation process is represented in the form of physical or expert models. Such models require extensive experimentation and verification that are not always feasible in practice. Another approach, which builds up knowledge about the system degradation over time from component sensor data, is known as data-driven. Data-driven models require that sufficient historical data have been collected. In this work, a two-phase data-driven method for RUL prediction is presented. In the offline phase, the proposed method builds on finding variables that contain information about the degradation behavior using an unsupervised variable selection method. Different health indicators (HI), which represent the degradation as a function of time, are constructed from the selected variables and saved in the offline database as reference models. In the online phase, the method estimates the degradation state using a discrete Bayesian filter. The method finally finds the offline health indicator most similar to the online one, using a k-nearest neighbors (k-NN) classifier and Gaussian process regression (GPR), and uses it as a RUL estimator. The method is verified using PRONOSTIA bearing data as well as battery and turbofan engine degradation data acquired from the NASA data repository. The results show the effectiveness of the method in predicting the RUL.
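
A loose sketch of the online matching step (not the author's implementation): match the online health indicator to the most similar offline one with a k-NN classifier on simple HI features, then extrapolate the matched indicator with a GPR to estimate when it crosses a failure threshold. The HI shapes, features, and threshold below are all hypothetical.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
FAIL_LEVEL = 5.0                                   # hypothetical failure threshold on the HI

# Offline library: one health indicator (HI) per historical component, keyed by component id.
offline_his = {i: 0.01 * np.arange(200.0) ** 1.2 * rng.uniform(0.8, 1.2) for i in range(5)}
features = np.array([[hi[-1], hi[-1] - hi[-50]] for hi in offline_his.values()])
knn = KNeighborsClassifier(n_neighbors=1).fit(features, list(offline_his.keys()))

# Online component observed up to t = 120: find the most similar offline HI.
online_hi = 0.011 * np.arange(120.0) ** 1.2
match_id = knn.predict([[online_hi[-1], online_hi[-1] - online_hi[-50]]])[0]

# Extrapolate the matched offline HI with a GPR and locate the failure crossing.
t = np.arange(200.0).reshape(-1, 1)
gpr = GaussianProcessRegressor(RBF(50.0) + WhiteKernel(1e-4), normalize_y=True)
gpr.fit(t, offline_his[match_id])
t_future = np.arange(120.0, 400.0).reshape(-1, 1)
pred = gpr.predict(t_future)
crossing = t_future[pred >= FAIL_LEVEL]
rul = crossing[0, 0] - 120.0 if crossing.size else None   # remaining useful life estimate
```
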
36

Image Segmentation, Parametric Study, and Supervised Surrogate Modeling of Image-based Computational Fluid Dynamics

Islam, Md Mahfuzul 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / With the recent advancement of computation and imaging technology, image-based computational fluid dynamics (ICFD) has emerged as a powerful non-invasive capability for studying biomedical flows. These modern technologies increase the potential of computation-aided diagnostics and therapeutics in a patient-specific environment. I studied three components of the image-based computational fluid dynamics process in this work. To ensure accurate medical assessment, realistic computational analysis is needed, for which patient-specific image segmentation of the diseased vessel is of paramount importance. In this work, image segmentation of several human arteries, veins, capillaries, and organs was conducted for use in further hemodynamic simulations. To accomplish this, several open-source and commercial software packages were employed. This study incorporates a new computational platform, called InVascular, to quantify the 4D velocity field in image-based pulsatile flows using the Volumetric Lattice Boltzmann Method (VLBM). We also conducted several parametric studies on an idealized case of a 3-D pipe with the dimensions of a human renal artery, investigating the relationship between stenosis severity and the resistive index (RI) and exploring how pulsatile parameters such as heart rate or pulsatile pressure gradient affect RI. As the process of ICFD analysis is based on imaging and other hemodynamic data, it is often time-consuming due to the extensive data processing time. For clinicians to make fast medical decisions regarding their patients, rapid and accurate ICFD results are needed. To that end, we also developed surrogate models that show the potential of supervised machine learning methods in constructing efficient and precise surrogates for Hagen-Poiseuille and Womersley flows.
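
As a hedged toy version of the surrogate-modeling component only (not the ICFD pipeline itself), a GPR can be trained on samples of the analytic Hagen-Poiseuille relation Q = πΔP r⁴ / (8μL) and then used as a fast stand-in for the solver; the fluid properties and parameter ranges below are assumed.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
mu, L = 3.5e-3, 0.05                      # assumed blood viscosity (Pa*s) and segment length (m)

# Training samples: inputs = [pressure drop (Pa), radius (m)], output = flow rate (m^3/s).
dp = rng.uniform(100.0, 2000.0, 300)
r = rng.uniform(1e-3, 4e-3, 300)
q = np.pi * dp * r ** 4 / (8.0 * mu * L)  # Hagen-Poiseuille "ground truth"

surrogate = make_pipeline(
    StandardScaler(),
    GaussianProcessRegressor(RBF([1.0, 1.0]), normalize_y=True),
)
surrogate.fit(np.column_stack([dp, r]), q)

q_pred = surrogate.predict([[800.0, 2.5e-3]])   # instant estimate instead of a CFD run
```
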
37

Optimal Q-Space Sampling Scheme : Using Gaussian Process Regression and Mutual Information

Hassler, Ture, Berntsson, Jonathan January 2022 (has links)
Diffusion spectrum imaging is a type of diffusion magnetic resonance imaging capable of capturing very complex tissue structures, but it requires a very large number of samples in q-space and therefore a long acquisition time. The purpose of this project was to create and evaluate a new sampling scheme in q-space for diffusion MRI, attempting to recreate the ensemble averaged propagator (EAP) with fewer samples and without significant loss of quality. The sampling scheme was created by greedily selecting the measurements contributing the most mutual information. The EAP was then recreated using the sampling scheme and interpolation. The mutual information was approximated using the kernel from a Gaussian process machine learning model. The project showed limited but promising results on synthetic data, but it was highly restricted by the amount of available computational power. Having to resort to a lower-resolution mesh when calculating the optimal sampling scheme significantly reduced the overall performance.
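
A compressed sketch of the greedy mutual-information criterion (in the spirit of the standard GP sensor-placement formulation, not necessarily the thesis implementation): at each step, pick the candidate q-space point whose gain σ²(y | selected) / σ²(y | rest) is largest under a GP kernel. The grid, kernel, and number of measurements are placeholders.

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF

def cond_var(K, i, idx):
    """Posterior variance of point i given the points in idx under kernel matrix K."""
    if len(idx) == 0:
        return K[i, i]
    K_aa = K[np.ix_(idx, idx)]
    k_ia = K[i, idx]
    return K[i, i] - k_ia @ np.linalg.solve(K_aa + 1e-8 * np.eye(len(idx)), k_ia)

# Candidate q-space points on a small 3-D grid (placeholder resolution).
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 5)] * 3), axis=-1).reshape(-1, 3)
K = RBF(length_scale=0.5)(grid)

selected, remaining = [], list(range(len(grid)))
for _ in range(30):                               # greedily pick 30 measurements
    gains = []
    for y in remaining:
        rest = [j for j in remaining if j != y]
        gains.append(cond_var(K, y, selected) / cond_var(K, y, rest))
    best = remaining[int(np.argmax(gains))]
    selected.append(best)
    remaining.remove(best)

sampling_scheme = grid[selected]                  # q-space coordinates to actually measure
```
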
38

Trajectory Prediction Using Gaussian Process Regression : Estimating Three Dynamical States Using Two Parameters / Positionsprediktering med Gaussisk Process Regression : Estimering av Tre Dynamiska Tillstånd Baserat på Två Parametrar

Hannebo, Ludvig January 2024 (has links)
In this thesis, a Gaussian process regression (GPR) model and a Kalman filter (KF) model were developed and applied to a trajectory prediction problem. The main subject of the thesis is GPR; the KF serves as a baseline for comparison with the GPR model. The input data for the models consist of two noisy spherical angle coordinates of a moving target relative to a moving guided projectile. In order to perform trajectory predictions, the models need to estimate the distance between the target and the guided projectile, since only two coordinates are available and an estimate of three coordinates is desired. The distance estimation was done with a Low Speed Approximation. The trajectories investigated were harmonic-exponential, exponential-spiral, and linear. The results showed issues with the hyperparameters of the GPR model, which may be related to the preprocessing of the trajectory data. However, the GPR model did outperform the KF model when there was acceleration, despite the issues with the hyperparameters. The KF model outperformed the GPR model when the target trajectory behaved linearly. The results indicate that GPR has potential as a trajectory prediction algorithm.
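
As a much-simplified, hypothetical analogue of the comparison (one noisy coordinate instead of two spherical angles, invented dynamics, no Low Speed Approximation), the two predictors might be set up as follows.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
t = np.arange(0.0, 5.0, 0.1)
truth = 0.3 * t + 0.2 * np.sin(2.0 * t)          # toy target angle trajectory (rad)
meas = truth + rng.normal(0.0, 0.02, t.size)     # noisy angle measurements

# GPR: regress the angle on time, then extrapolate 1 s ahead.
gpr = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-3), normalize_y=True)
gpr.fit(t.reshape(-1, 1), meas)
t_pred = np.arange(5.0, 6.0, 0.1).reshape(-1, 1)
gpr_pred = gpr.predict(t_pred)

# Constant-velocity Kalman filter on the same measurements.
dt, x, P = 0.1, np.zeros(2), np.eye(2)           # state = [angle, angular rate]
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-5 * np.eye(2), np.array([[0.02 ** 2]])
for z in meas:
    x, P = F @ x, F @ P @ F.T + Q                            # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x)).ravel()            # update
    P = (np.eye(2) - K @ H) @ P
kf_pred = [float((np.linalg.matrix_power(F, k) @ x)[0]) for k in range(1, 11)]  # 1 s ahead
```
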
39

Analyzing Lower Limb Motion Capture with Smartphone : Possible improvements using machine learning / Analys av rörelsefångst för nedre extremiteterna med smartphone : Möjliga förbättringar med hjälp av maskininlärning

Brink, Anton January 2024 (has links)
Human motion analysis (HMA) can play a crucial role in sports and healthcare by providing unique insights into movement mechanics in the form of objective measurements and quantitative data. Traditional, state-of-the-art, marker-based techniques, despite their accuracy, come with financial and logistical barriers and are restricted to laboratory settings. Markerless systems offer much improved affordability and portability and can potentially be used outside of laboratories. However, these advantages come with a significant cost in accuracy. This thesis attempts to address the challenge of democratizing HMA by leveraging recent advances in smartphone technology and machine learning. The thesis evaluates two modalities for performing markerless HMA: a single smartphone using Apple ARKit, and a multiple-smartphone setup using OpenCap, and compares both to a state-of-the-art multi-camera marker-based system from Vicon. Additionally, it presents and evaluates two approaches to improving the single-smartphone modality: a Gaussian process regression (GPR) model and a long short-term memory (LSTM) neural network that refine the single-smartphone data to align with the marker-based result. Specific movements were recorded simultaneously with all three modalities on 13 subjects to build a dataset. From this, GPR and LSTM models were trained and applied to refine the single-camera modality data (Apple ARKit). Lower limb joint angles and joint centers were evaluated across the different modalities and analyzed for potential use in real-world applications. The findings of this thesis are promising, as both the GPR and LSTM models improve the accuracy of Apple ARKit, and OpenCap provides accurate and consistent results. It is important, however, to acknowledge limitations regarding demographic diversity and how real-world environmental factors may influence the method's application. This thesis contributes to the efforts in narrowing the gap between marker-based HMA methods and more accessible solutions.
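
Not the thesis code: a minimal sketch of the GPR refinement idea, mapping a smartphone-derived joint-angle trace toward the marker-based reference and comparing RMSE before and after, with entirely synthetic angle data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
frames = np.arange(0, 300)
vicon = 30.0 + 25.0 * np.sin(2 * np.pi * frames / 100.0)          # reference knee angle (deg)
arkit = 0.8 * vicon + 5.0 + rng.normal(0.0, 3.0, frames.size)     # biased, noisy smartphone angle

train = frames < 200                                              # earlier frames for training
gpr = GaussianProcessRegressor(RBF(10.0) + WhiteKernel(1.0), normalize_y=True)
gpr.fit(arkit[train].reshape(-1, 1), vicon[train])

refined = gpr.predict(arkit[~train].reshape(-1, 1))
rmse_raw = np.sqrt(np.mean((arkit[~train] - vicon[~train]) ** 2))
rmse_refined = np.sqrt(np.mean((refined - vicon[~train]) ** 2))
print(f"RMSE vs reference: raw {rmse_raw:.1f} deg, GPR-refined {rmse_refined:.1f} deg")
```
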
40

CONSTRUCTION EQUIPMENT FUEL CONSUMPTION DURING IDLING : Characterization using multivariate data analysis at Volvo CE

Hassani, Mujtaba January 2020 (has links)
Human activities have increased the concentration of CO2 in the atmosphere, contributing to global warming. Construction equipment is semi-stationary machinery that spends at least 30% of its lifetime idling. The majority of construction equipment is diesel-powered and emits toxic emissions into the environment. In this work, idling is investigated by adopting several statistical regression models to quantify the fuel consumption of construction equipment during idling. The regression models studied are: Multivariate Linear Regression (ML-R), Support Vector Machine Regression (SVM-R), Gaussian Process Regression (GP-R), Artificial Neural Network (ANN), Partial Least Squares Regression (PLS-R), and Principal Components Regression (PC-R). Findings show that pre-processing has a significant impact on the quality of the predictions in this field. Moreover, with mean centering and min-max scaling, the accuracy of the models increased remarkably. ANN and GP-R had the highest accuracy (99%), followed by PLS-R (98%) and ML-R (97%), while PC-R reached 83% and SVM-R 73%. The second part of this project estimated the CO2 emission based on the fuel used, adopting the NONROAD2008 model.
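
As a hedged sketch of the pre-processing point made above (not Volvo CE data or code), scaling and a regressor can be wrapped in a pipeline and the models compared by cross-validation; the features and fuel-rate relation below are synthetic placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)

# Hypothetical idling records: [engine speed (rpm), ambient temp (C), hydraulic load (kW)].
X = np.column_stack([
    rng.uniform(600, 900, 400),
    rng.uniform(-20, 35, 400),
    rng.uniform(0, 15, 400),
])
fuel = 0.002 * X[:, 0] + 0.01 * np.abs(X[:, 1]) + 0.08 * X[:, 2] + rng.normal(0, 0.1, 400)  # L/h

models = {
    "ML-R": LinearRegression(),
    "SVM-R": SVR(),
    "GP-R": GaussianProcessRegressor(normalize_y=True),
}
for name, reg in models.items():
    pipe = make_pipeline(MinMaxScaler(), reg)           # min-max scaling before the regressor
    score = cross_val_score(pipe, X, fuel, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {score:.3f}")
```
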
