  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Vehicle Speed Estimation for Articulated Heavy-Duty Vehicles

Rombach, Markus January 2018 (has links)
Semi-autonomous functions and autonomous solutions are common trends in the vehicle industry. This new type of functionality places high demands on knowledge of the vehicle's state. An accurate vehicle speed is important for many functions; one example is the positioning system, which often relies on an accurate speed estimate. This thesis investigates how an IMU (Inertial Measurement Unit), consisting of a gyroscope and an accelerometer, can support vehicle speed estimation from wheel speed sensors. For this purpose, the IMU was mounted on a wheel loader. To investigate the speed estimation, EKFs (Extended Kalman Filters) with different vehicle and sensor models were implemented, and all filters were extended to Kalman smoothers. First, an analysis of the sensors was performed. The EKFs were then developed and verified using a simulation model developed by Volvo Construction Equipment. The filters were also implemented on the wheel loader and tested on data collected from real-world scenarios.
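The wheel-speed/IMU fusion described above can be illustrated with a scalar Kalman filter in which the accelerometer drives the prediction and the wheel-speed sensor the update. This is a simplified sketch, not the thesis's EKF; the noise levels `q` and `r` and the sampling time `dt` are assumed values.

```python
import numpy as np

def kalman_speed(accel, wheel_speed, dt=0.01, q=0.1, r=0.5):
    """Fuse accelerometer readings (prediction) and wheel-speed readings
    (update) into a single speed estimate with a scalar Kalman filter.
    accel and wheel_speed are 1-D arrays of the same length."""
    v, p = 0.0, 1.0              # state (speed) and its variance
    out = []
    for a, z in zip(accel, wheel_speed):
        # predict: integrate acceleration, inflate variance by process noise
        v += a * dt
        p += q
        # update: blend in the wheel-speed measurement
        k = p / (p + r)          # Kalman gain
        v += k * (z - v)
        p *= (1.0 - k)
        out.append(v)
    return np.array(out)
```

In a smoother variant, a backward pass over the stored estimates would refine them using later measurements as well.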
2

The Estimation Methods for an Integrated INS/GPS UXO Geolocation System

Lee, Jong Ki January 2009 (has links)
No description available.
3

Bandwidth Selection Concerns for Jump Point Discontinuity Preservation in the Regression Setting Using M-smoothers and the Extension to Hypothesis Testing

Burt, David Allan 31 March 2000 (has links)
Most traditional parametric and nonparametric regression methods operate under the assumption that the true function is continuous over the design space. For methods such as ordinary least squares polynomial regression and local polynomial regression, the functional estimates are constrained to be continuous. Fitting a function that is not continuous with a continuous estimate has practical scientific implications as well as important model misspecification effects. Scientifically, breaks in the continuity of the underlying mean function may correspond to specific physical phenomena that will be hidden from the researcher by a continuous regression estimate. Statistically, misspecifying a mean function as continuous when it is not will result in an increased bias in the estimate. One recently developed nonparametric regression technique that does not constrain the fit to be continuous is the jump-preserving M-smooth procedure of Chu, Glad, Godtliebsen & Marron (1998), 'Edge-preserving smoothers for image processing', Journal of the American Statistical Association 93(442), 526-541. Chu et al.'s (1998) M-smoother is defined in such a way that the noise about the mean function is smoothed out while jumps in the mean function are preserved. Before the jump-preserving M-smoother can be used in practice, the choice of the bandwidth parameters must be addressed. The jump-preserving M-smoother requires two bandwidth parameters, h and g, which determine the amount of noise that is smoothed out as well as the size of the jumps that are preserved. If these parameters are chosen haphazardly, the resulting fit can exhibit worse bias properties than traditional regression methods that assume a continuous mean function. Currently there are no automatic bandwidth selection procedures available for the jump-preserving M-smoother of Chu et al. (1998).
One of the main objectives of this dissertation is to develop an automatic, data-driven bandwidth selection procedure for Chu et al.'s (1998) M-smoother. We present two bandwidth selection procedures: the first is a crude rule-of-thumb method, and the second is a more sophisticated direct plug-in method. Our bandwidth selection procedures are modeled after the methods of Chu et al. (1998), with two significant modifications that make the methods robust to possible jump points. Another objective of this dissertation is to provide a nonparametric hypothesis test, based on Chu et al.'s (1998) M-smoother, for a break in the continuity of an underlying regression mean function. Our proposed hypothesis test is nonparametric in the sense that the mean function away from the jump point(s) is not required to follow a specific parametric model. In addition, the test does not require the user to specify the number, position, or size of the jump points in the alternative hypothesis, as many current methods do. Thus the null and alternative hypotheses for our test are: H0: the mean function is continuous (i.e., no jump points) vs. HA: the mean function is not continuous (i.e., there is at least one jump point). Our testing procedure takes the form of a critical bandwidth hypothesis test. The test statistic is essentially the largest bandwidth that allows Chu et al.'s (1998) M-smoother to satisfy the null hypothesis. The significance of the test is then calculated via a bootstrap method. This test is currently in the experimental stage of its development. In this dissertation we outline the steps required to calculate the test and assess its power with a small simulation study. Future work, such as a faster calculation algorithm, is required before the testing procedure will be practical for the general user. / Ph. D.
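The jump-preserving idea can be sketched as a local average that weights observations both by their distance in x (bandwidth h) and by their distance in y (bandwidth g), so points on the other side of a jump receive negligible weight. This is an illustrative simplification in the spirit of Chu et al.'s (1998) M-smoother, not the authors' exact estimator.

```python
import numpy as np

def m_smooth(x, y, h, g, iters=20):
    """Jump-preserving smoother sketch: each fitted value is a kernel-
    weighted average where the weights combine closeness in x (bandwidth
    h) with closeness in y to the current fit (bandwidth g).  Points
    across a jump differ strongly in y and are effectively ignored."""
    fit = y.astype(float).copy()
    for _ in range(iters):
        new = np.empty_like(fit)
        for j, xj in enumerate(x):
            wx = np.exp(-0.5 * ((x - xj) / h) ** 2)       # kernel in x
            wy = np.exp(-0.5 * ((y - fit[j]) / g) ** 2)   # kernel in y
            w = wx * wy
            new[j] = np.sum(w * y) / np.sum(w)
        fit = new
    return fit
```

With g chosen smaller than the jump size, a step function is reproduced almost exactly, whereas an ordinary kernel smoother would blur the discontinuity.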
4

Méthodes numériques pour les problèmes des moindres carrés, avec application à l'assimilation de données / Numerical methods for least squares problems with application to data assimilation

Bergou, El Houcine 11 December 2014 (has links)
The Levenberg-Marquardt algorithm (LM) is one of the most popular algorithms for the solution of nonlinear least squares problems. Motivated by the problem structure in data assimilation, we consider in this thesis the extension of the LM algorithm to scenarios where the linearized least squares subproblems, of the form min ||Ax - b||^2, are solved inexactly and/or the gradient model is noisy and accurate only within a certain probability.
Under appropriate assumptions, we show that the modified algorithm converges globally and almost surely to a first-order stationary point. Our approach is applied to an instance in variational data assimilation where stochastic models of the gradient are computed by the so-called ensemble Kalman smoother (EnKS). A convergence proof in L^p of the EnKS to the Kalman smoother in the limit of large ensembles is given. We also show the convergence of the LM-EnKS approach, a variant of the LM algorithm with the EnKS as linear solver, to the classical LM algorithm in which the linearized subproblem is solved exactly. The sensitivity of the truncated singular value decomposition method used to solve the linearized subproblems is studied. We formulate an explicit expression for the condition number of the truncated least squares solution, given in terms of the singular values of A and the Fourier coefficients of b.
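A minimal sketch of the Levenberg-Marquardt iteration for min ||r(x)||^2, with a simple doubling/halving update of the damping parameter; the thesis's inexact and stochastic variants are not shown.

```python
import numpy as np

def levenberg_marquardt(r, J, x0, mu=1.0, iters=100):
    """Basic Levenberg-Marquardt: at each step solve
    (J^T J + mu I) dx = -J^T r and adapt the damping mu according to
    whether the step reduced the squared residual norm."""
    x = np.asarray(x0, float)
    cost = np.sum(r(x) ** 2)
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        A = Jx.T @ Jx + mu * np.eye(len(x))
        dx = np.linalg.solve(A, -Jx.T @ rx)
        new_cost = np.sum(r(x + dx) ** 2)
        if new_cost < cost:          # accept step, trust the model more
            x, cost, mu = x + dx, new_cost, mu * 0.5
        else:                        # reject step, damp harder
            mu *= 2.0
    return x
```

As mu grows the step approaches a short gradient-descent step; as mu shrinks it approaches the Gauss-Newton step, which is what makes the damping parameter act like a regularization parameter.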
5

[pt] AVALIANDO O USO DO ALGORITMO RANDOM FOREST PARA SIMULAÇÃO EM RESERVATÓRIOS MULTI-REGIÕES / [en] EVALUATING THE USE OF RANDOM FOREST REGRESSOR TO RESERVOIR SIMULATION IN MULTI-REGION RESERVOIRS

IGOR CAETANO DINIZ 22 June 2023 (has links)
[en] Oil and gas reservoir simulation is a common demand in petroleum engineering and research, and it may carry a high computational cost in processing and time when solving a mathematical problem numerically. Moreover, several reservoir characterization methods require multiple iterations, resulting in many simulations to obtain a reasonable characterization. Ensemble-based methods, such as the ensemble Kalman filter (EnKF) and the Ensemble Smoother with Multiple Data Assimilation (ES-MDA), likewise demand many simulation runs to produce their output. As a result, reservoir simulation can become a complex subject when working with reservoir characterization.
The use of machine learning has been increasing in the energy industry. It can improve the accuracy of reservoir predictions, optimize production strategies, and serve many other applications. The complexity and uncertainty of reservoir models pose significant challenges to traditional modeling approaches, making machine learning an attractive alternative. Aiming to reduce the complexities of reservoir simulation, this work investigates using a machine-learning model as an alternative to conventional simulators. The Random Forest regressor is tested on reproducing pressure response solutions for multi-region radial composite reservoirs. An analytical approach is employed to create the training dataset by the following procedure: the permeability is drawn from a specified distribution, and the output is generated with the analytical solution. Through experimentation and analysis, this work aims to advance the understanding of machine learning for reservoir simulation in the energy industry.
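The surrogate idea can be sketched with a small hand-rolled random forest: bootstrap samples, CART-style regression trees, and prediction by averaging. The training function used in the sketch (a logarithmic pressure-like curve) is an assumed stand-in, not the thesis's analytical multi-region solution.

```python
import numpy as np

def build_tree(X, y, depth, rng, n_feat):
    """Grow one CART-style regression tree, choosing the best
    SSE-minimising split among a random subset of features."""
    if depth == 0 or len(y) < 4 or np.ptp(y) == 0:
        return float(np.mean(y))
    best = None
    for f in rng.choice(X.shape[1], n_feat, replace=False):
        for t in np.unique(X[:, f])[1:]:
            left = X[:, f] < t
            sse = (np.var(y[left]) * left.sum()
                   + np.var(y[~left]) * (~left).sum())
            if best is None or sse < best[0]:
                best = (sse, f, t)
    if best is None:                       # no valid split available
        return float(np.mean(y))
    _, f, t = best
    left = X[:, f] < t
    return (f, t,
            build_tree(X[left], y[left], depth - 1, rng, n_feat),
            build_tree(X[~left], y[~left], depth - 1, rng, n_feat))

def predict_tree(node, x):
    while isinstance(node, tuple):         # descend until a leaf value
        f, t, lo, hi = node
        node = lo if x[f] < t else hi
    return node

def random_forest(X, y, n_trees=20, depth=7, seed=0):
    """Bagged ensemble: each tree sees a bootstrap resample; the
    forest prediction is the average of the tree predictions."""
    rng = np.random.default_rng(seed)
    n_feat = max(1, X.shape[1])
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))   # bootstrap sample
        trees.append(build_tree(X[idx], y[idx], depth, rng, n_feat))
    return lambda x: float(np.mean([predict_tree(t, x) for t in trees]))
```

In practice a library implementation (e.g. a RandomForestRegressor) would replace this sketch; the point is that a smooth analytical response can be learned from sampled input/output pairs and then evaluated far faster than the simulator.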
6

[en] EVALUATING THE IMPACT OF THE INFLATION FACTORS GENERATION FOR THE ENSEMBLE SMOOTHER WITH MULTIPLE DATA ASSIMILATION / [pt] INVESTIGANDO O IMPACTO DA GERAÇÃO DOS FATORES DE INFLAÇÃO PARA O ENSEMBLE SMOOTHER COM MÚLTIPLA ASSIMILAÇÃO DE DADOS

THIAGO DE MENEZES DUARTE E SILVA 09 September 2021 (has links)
[en] The ensemble smoother with multiple data assimilation (ES-MDA) has gained much attention as a powerful parameter estimation method.
The main idea of the ES-MDA is to assimilate the same data multiple times with an inflated data-error covariance matrix. In the original ES-MDA implementation, the inflation factors and the number of assimilations are selected a priori. The only requirement is that the sum of the inverses of the inflation factors equals one, so selecting them all equal to the number of assimilations is a straightforward choice. Nevertheless, recent studies have shown a relationship between the ES-MDA update equation and the solution to a regularized inverse problem; the inflation factors thus play the role of the regularization parameter at each ES-MDA assimilation step. These studies have also suggested procedures to generate the factors based on the discrepancy principle. Although several efficient techniques to generate the ES-MDA inflation factors have been proposed, an optimal procedure for generating them remains an open problem, and the studies diverge on which regularization scheme suffices to provide the best ES-MDA outcomes. Therefore, in this work, we address the problem of generating the ES-MDA inflation factors and their influence on the method's performance. We present a numerical analysis of the influence of such factors on the main parameters of the ES-MDA: the ensemble size, the number of assimilations, and the ES-MDA vector of model-parameter updates. Based on the conclusions of this analysis, we propose a new procedure to generate ES-MDA inflation factors based on a regularizing scheme for Levenberg-Marquardt algorithms. It is shown on a synthetic two-dimensional waterflooding problem that the new method achieves a better match of model parameters and data than the other ES-MDA implementations available in the literature.
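A toy version of the ES-MDA update makes the role of the inflation factors explicit: the same data are assimilated once per factor, the observation noise is inflated by alpha_i at each step, and the inverses of the factors must sum to one. This sketch assumes a generic forward operator and is not tied to any particular reservoir model.

```python
import numpy as np

def es_mda(forward, m_prior, d_obs, cd, alphas, rng):
    """ES-MDA sketch.  m_prior: parameter ensemble (n_param, n_ens);
    d_obs: observed data (n_data,); cd: data-error covariance;
    alphas: inflation factors with sum(1/alpha_i) == 1."""
    assert abs(sum(1.0 / a for a in alphas) - 1.0) < 1e-8
    m = m_prior.copy()
    for a in alphas:
        d = forward(m)                            # predicted data (n_data, n_ens)
        # perturb the observations with noise inflated by alpha
        d_pert = d_obs[:, None] + np.sqrt(a) * rng.multivariate_normal(
            np.zeros(len(d_obs)), cd, m.shape[1]).T
        dm = m - m.mean(axis=1, keepdims=True)    # parameter anomalies
        dd = d - d.mean(axis=1, keepdims=True)    # data anomalies
        cmd = dm @ dd.T / (m.shape[1] - 1)        # cross-covariance
        cdd = dd @ dd.T / (m.shape[1] - 1)
        gain = cmd @ np.linalg.inv(cdd + a * cd)  # Kalman-type gain
        m = m + gain @ (d_pert - d)
    return m
```

Choosing all factors equal to the number of assimilations (e.g. four factors of 4) satisfies the constraint; the adaptive schemes discussed above instead pick each alpha_i on the fly, e.g. from the discrepancy principle.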
7

[en] DIMENSIONLESS ENSEMBLE SMOOTHER WITH MULTIPLE DATA ASSIMILATION APPLIED ON AN INVERSE PROBLEM OF A MULTILAYER RESERVOIR WITH A DAMAGED ZONE / [pt] ENSEMBLE SMOOTHER ADIMENSIONAL COM MÚLTIPLA ASSIMILAÇÃO APLICADO A UM PROBLEMA INVERSO DE RESERVATÓRIO MULTICAMADAS COM ZONA DE SKIN

ADAILTON JOSE DO NASCIMENTO SOUSA 05 December 2022 (has links)
[en] The ES-MDA has been used extensively for inverse problems of oil reservoirs, with Bayesian statistics at its core. Important properties such as permeability, skin-zone radius, and skin-zone permeability are estimated from historical reservoir data using this ensemble-based method. In this thesis, the pressure measured at the well during an injectivity test was calculated using an analytical model of a multilayer reservoir with a skin zone, via the Laplace transform. Stehfest's algorithm was used to invert the data to the real domain. Furthermore, this approach easily yields the flow rate in each layer as additional data to be considered in the ES-MDA, enriching the estimation of the targeted properties.
As we use both flow rate and pressure as input data to the ES-MDA, it is important to ensure that the difference in their orders of magnitude does not influence the estimates; for this reason, we use the ES-MDA in dimensionless form. Aiming at greater precision of the estimates, we used an algorithm to optimize the ES-MDA inflation factors.
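Numerical inversion of a Laplace-domain solution can be done with the Gaver-Stehfest algorithm mentioned above. A generic sketch follows; the reservoir-specific transform F(s) is not reproduced here.

```python
import numpy as np
from math import factorial

def stehfest_weights(n):
    """Stehfest coefficients V_i (n must be even)."""
    v = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, n // 2) + 1):
            s += (k ** (n // 2) * factorial(2 * k)
                  / (factorial(n // 2 - k) * factorial(k)
                     * factorial(k - 1) * factorial(i - k)
                     * factorial(2 * k - i)))
        v.append((-1) ** (i + n // 2) * s)
    return np.array(v)

def stehfest_invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_i V_i * F(i ln 2 / t)."""
    v = stehfest_weights(n)
    s = np.log(2.0) / t * np.arange(1, n + 1)
    return np.log(2.0) / t * np.sum(v * np.array([F(si) for si in s]))
```

The method only needs real-valued samples of F(s), which makes it popular for well-test solutions; n around 8-16 is typical, since larger n amplifies round-off.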
8

Analysis of main parameters in adaptive ES-MDA history matching. / Análise dos principais parâmetros no ajuste de histórico utilizando ES-MDA adaptativo.

Ranazzi, Paulo Henrique 06 June 2019 (has links)
In reservoir engineering, history matching is the technique that revises the uncertain parameters of a reservoir simulation model in order to obtain a response consistent with the observed production data. Reservoir properties carry uncertainties due to their indirect acquisition methods, which results in discrepancies between the observed data and the reservoir simulator response. One history matching method is the Ensemble Smoother with Multiple Data Assimilation (ES-MDA), in which an ensemble of models is used to quantify parameter uncertainties. In ES-MDA, the number of iterations must be defined by the user prior to the application, and it is a determinant parameter for a good-quality match. One way to handle this is to implement adaptive methodologies in which the algorithm keeps iterating until it reaches a good match. Also, in large-scale reservoir models it is necessary to apply a localization technique in order to mitigate spurious correlations and excessive uncertainty reduction in the posterior models. The main objective of this dissertation is to evaluate two main parameters of history matching with an adaptive ES-MDA: localization and ensemble size, verifying the impact of these parameters on the adaptive scheme. The adaptive ES-MDA used in this work defines the number of iterations and the inflation factors automatically, and distance-based Kalman gain localization was used to evaluate the influence of localization. The influence of the parameters was analyzed by applying the methodology to the UNISIM-I-H benchmark: a synthetic large-scale reservoir model based on an offshore Brazilian field. The experiments showed a considerable reduction of the objective function in all cases, demonstrating the ability of the adaptive methodology to keep iterating until a desirable outcome is obtained.
Regarding the parameters evaluated, a relationship between localization and the number of iterations required to complete the adaptive algorithm was verified, and this influence was not observed as a function of the ensemble size.
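Distance-based Kalman gain localization is typically built from a compactly supported taper such as the Gaspari-Cohn fifth-order function, applied elementwise to the gain or cross-covariance so that correlations beyond a cutoff distance are zeroed. A sketch of the taper (the specific localization settings used in the dissertation are not reproduced):

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order taper: equals 1 at r = 0 and 0 for
    r >= 2, where r is distance divided by the localization half-length."""
    r = np.abs(np.asarray(r, float))
    out = np.zeros_like(r)
    m = r <= 1
    out[m] = (-0.25 * r[m] ** 5 + 0.5 * r[m] ** 4 + 0.625 * r[m] ** 3
              - (5.0 / 3.0) * r[m] ** 2 + 1.0)
    m = (r > 1) & (r < 2)
    out[m] = (r[m] ** 5 / 12.0 - 0.5 * r[m] ** 4 + 0.625 * r[m] ** 3
              + (5.0 / 3.0) * r[m] ** 2 - 5.0 * r[m] + 4.0
              - 2.0 / (3.0 * r[m]))
    return out
```

Multiplying the ensemble cross-covariance by this taper (a Schur product over grid-cell/data distances) suppresses the spurious long-range correlations that small ensembles produce.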
9

Modelování výnosových křivek a efekt makroekonomických proměnných: Dynamický Nelson-Siegelův přístup / Yield Curve Modeling and the Effect of Macroeconomic Drivers: Dynamic Nelson-Siegel Approach

Patáková, Magdalena January 2012 (has links)
The thesis focuses on yield curve modeling using the dynamic Nelson-Siegel approach. We propose two models of the yield curve and apply them to four currency areas: USD, EUR, GBP and CZK. First, we distill the entire yield curve into time-varying level, slope and curvature factors and estimate the parameters for the individual currencies. Subsequently, we build a novel model investigating to what extent the unobservable factors of the dynamic Nelson-Siegel model are determined by macroeconomic drivers. The main contribution of this thesis resides in the innovative approach to yield curve modeling with the application of advanced technical tools. Our primary objective was to increase the accuracy and the estimation power of the model. Moreover, we applied both models across different currency areas, which enabled us to compare the dynamics of the yield curves as well as the influence of the macroeconomic drivers. Interestingly, the results show that both models we developed not only demonstrate strong validity but also produce powerful estimates across all examined currencies. In addition, the incorporated macroeconomic factors contributed to a higher precision of the modeling. JEL Classification: C51, C53, G17 Keywords: Nelson-Siegel, Kalman filter, Kalman smoother, State space formulation...
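The level-slope-curvature decomposition mentioned above comes from the Nelson-Siegel factor loadings. A sketch follows, using the common convention of maturities in months with the decay parameter lambda fixed at 0.0609; that value is a conventional assumption of this sketch, not necessarily the thesis's estimate.

```python
import numpy as np

def nelson_siegel(tau, level, slope, curvature, lam=0.0609):
    """Nelson-Siegel yield at maturity tau:
    y(tau) = L + S * (1 - exp(-lam*tau)) / (lam*tau)
               + C * ((1 - exp(-lam*tau)) / (lam*tau) - exp(-lam*tau)).
    The level loading is constant, the slope loading decays from 1 to 0,
    and the curvature loading is hump-shaped in maturity."""
    x = lam * np.asarray(tau, float)
    load_s = (1.0 - np.exp(-x)) / x
    load_c = load_s - np.exp(-x)
    return level + slope * load_s + curvature * load_c
```

In the dynamic version, (L, S, C) become a time series; stacking the loadings into a measurement matrix gives the state-space form that the Kalman filter and smoother estimate.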
10

Sensor Fusion in Smartphones : with Application to Car Racing Performance Analysis / Sensorfusion i Smartphones : med Tillämpning Inom Bilkörningsanalys

Wallin, Jonas, Zachrisson, Joakim January 2013 (has links)
Today's smartphones are equipped with a variety of sensors, such as GPS receivers, accelerometers, gyroscopes and magnetometers, making smartphones viable tools in many applications. The computational capacity of smartphones allows software applications to run advanced signal processing algorithms. Thus, attaching a smartphone inside a car makes it possible to estimate the kinematics of the vehicle by fusing information from the different sensors inside the smartphone. Fusing information from different sources to improve estimation quality is a well-known problem, and many methods and algorithms exist in this area. This thesis approaches the sensor fusion problem of estimating the kinematics of cars using smartphones, for the purpose of analysing driving performance.
Different variants of the coordinated turn model for describing the vehicle dynamics are investigated. Different measurement models are also evaluated, in which bias errors of the sensors are taken into account. Pre-filtering and the construction of pseudo-measurements are also considered, which allow the use of state space models of lower dimension.
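A minimal version of the coordinated turn prediction step reads as follows; the state layout and the near-zero turn-rate guard are assumptions of this sketch, not the thesis's exact model.

```python
import numpy as np

def coordinated_turn_step(state, dt):
    """One prediction step of a coordinated turn motion model.
    state = [x, y, v, heading, turn_rate]; speed and turn rate are
    held constant over the step, so the vehicle moves on a circular
    arc (or a straight line when the turn rate is ~0)."""
    x, y, v, h, w = state
    if abs(w) > 1e-9:
        # exact integration along the circular arc of radius v / w
        x += v / w * (np.sin(h + w * dt) - np.sin(h))
        y += v / w * (np.cos(h) - np.cos(h + w * dt))
    else:
        # straight-line limit as the turn rate goes to zero
        x += v * dt * np.cos(h)
        y += v * dt * np.sin(h)
    return np.array([x, y, v, h + w * dt, w])
```

In a filter, this step would serve as the (nonlinear) process model, with the smartphone's gyroscope and GPS feeding the measurement update.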
