About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Modelos de regressão beta com erro nas variáveis / Beta regression model with measurement error

Carrasco, Jalmar Manuel Farfan 25 May 2012 (has links)
In this thesis, we propose a beta regression model with measurement error, a largely unexplored setting among nonlinear models with measurement error. We discuss estimation methods such as approximate maximum likelihood, approximate pseudo-maximum likelihood, and regression calibration. The approximate maximum likelihood method obtains estimates by directly maximizing the logarithm of the likelihood function. The approximate pseudo-maximum likelihood method is used when inference in a given model involves only some, but not all, of the parameters; in this sense, the model presents parameters of interest as well as nuisance parameters. When we replace the true covariate (an unobserved variable) with an estimate of its conditional expectation given the observed surrogate, the method is known as regression calibration. We compare these estimation methods through a Monte Carlo simulation study, which shows that the approximate maximum likelihood and approximate pseudo-maximum likelihood methods outperform the regression calibration and naïve approaches. We use the Ox programming language (Doornik, 2011) as a computational tool. We derive the asymptotic distribution of the estimators in order to compute confidence intervals and test hypotheses, as proposed by Carroll et al. (2006, Section A.6.6), Guolo (2011), and Gong and Samaniego (1981). Moreover, we use the likelihood ratio and gradient statistics to test hypotheses, and a simulation study evaluates the performance of these two tests. We develop diagnostic techniques for the beta regression model with measurement error. We propose standardized weighted residuals, as defined by Espinheira (2008), to verify the model assumptions and to detect outliers. Measures of global influence, such as the generalized Cook's distance and the likelihood displacement, are used to detect influential points. In addition, we apply the conformal local influence approach under three perturbation schemes: case weighting, response variable perturbation, and perturbation of the covariate with and without measurement error. We apply our results to two real data sets to illustrate the theory developed and close with conclusions and possible future work.
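The contrast between the naïve and regression calibration approaches can be illustrated with a small simulation. The sketch below is written in Python rather than the Ox language used in the thesis, assumes the measurement error variance is known, and uses invented parameter values; it illustrates the general regression calibration idea, not the thesis's own estimators.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(1)

# Beta regression with a mismeasured covariate:
#   y ~ Beta(mu*phi, (1-mu)*phi),  logit(mu) = b0 + b1*x,  w = x + u.
n, b0, b1, phi = 500, -0.5, 1.0, 30.0
sigma_x, sigma_u = 1.0, 0.5                  # assumed known here
x = rng.normal(0.0, sigma_x, n)              # true covariate (unobserved)
w = x + rng.normal(0.0, sigma_u, n)          # observed surrogate
mu = expit(b0 + b1 * x)
y = rng.beta(mu * phi, (1.0 - mu) * phi)

def negloglik(theta, cov):
    """Negative beta regression log-likelihood with a logit mean link."""
    b0_, b1_, logphi = theta
    phi_ = np.exp(logphi)
    m = expit(b0_ + b1_ * cov)
    a, b = m * phi_, (1.0 - m) * phi_
    return -np.sum(gammaln(phi_) - gammaln(a) - gammaln(b)
                   + (a - 1.0) * np.log(y) + (b - 1.0) * np.log1p(-y))

def fit(cov):
    return minimize(negloglik, x0=np.array([0.0, 0.0, np.log(10.0)]),
                    args=(cov,), method="BFGS").x

naive = fit(w)                               # plug in the surrogate directly

# Regression calibration: replace w by an estimate of E[x | w].
# For normal x and u, E[x|w] = mu_x + lam*(w - mu_x) with
# lam = sigma_x^2 / (sigma_x^2 + sigma_u^2).
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
calib = fit(w.mean() + lam * (w - w.mean()))

print(f"slope: naive {naive[1]:.3f}, calibrated {calib[1]:.3f}, true {b1}")
```

With these settings the naïve slope estimate is typically attenuated toward zero, while the calibrated fit lands closer to the true value.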
22

Three essays on the econometric analysis of high-frequency data

Malec, Peter 27 June 2013 (has links)
In three essays, this thesis deals with the econometric analysis of financial market data sampled at intraday frequencies. Chapter 1 presents a novel approach to modeling serially dependent positive-valued variables that realize a nontrivial proportion of zero outcomes, a typical phenomenon in financial high-frequency time series. We introduce a flexible point-mass mixture distribution, a tailor-made semiparametric specification test, and a new type of multiplicative error model (MEM). Chapter 2 addresses the problem that fixed symmetric kernel density estimators exhibit low precision for positive-valued variables with a large probability mass near zero, a situation common in high-frequency data. We show that gamma kernel estimators are superior, although their relative performance depends on the specific density and kernel shape, and we suggest a refined gamma kernel together with a data-driven method for choosing the appropriate type of gamma kernel estimator. Chapter 3 turns to the debate about the merits of high-frequency data in large-scale portfolio allocation. Considering the problem of constructing global minimum-variance portfolios based on the constituents of the S&P 500, we show that forecasts based on high-frequency data can yield significantly lower portfolio volatility than approaches using daily returns, implying noticeable utility gains for an investor with high risk aversion.
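Chapter 2's starting point is easy to state: instead of centering a fixed symmetric kernel at each observation, a gamma kernel estimator evaluates, at each point x, gamma densities whose shape depends on x, so no estimated mass leaks below zero. The sketch below implements the standard gamma kernel estimator in the style of Chen (2000) as a baseline; the refined kernel and data-driven selection rule proposed in the thesis differ, and the bandwidth here is illustrative.

```python
import numpy as np
from scipy.stats import gamma

def gamma_kde(x_grid, data, b):
    """Gamma kernel density estimate on [0, inf) in the style of
    Chen (2000): at each point x, average gamma densities with
    shape x/b + 1 and scale b, evaluated at the observations."""
    x_grid = np.asarray(x_grid, dtype=float)[:, None]
    return gamma.pdf(data[None, :], a=x_grid / b + 1.0, scale=b).mean(axis=1)

rng = np.random.default_rng(0)
data = rng.exponential(0.3, 1000)       # heavy probability mass near zero
grid = np.linspace(0.0, 2.0, 200)
fhat = gamma_kde(grid, data, b=0.05)    # illustrative bandwidth
# Unlike a fixed Gaussian kernel, no estimated mass spills below zero,
# and the kernel shape adapts as x approaches the boundary.
```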
24

Election Administration within the Sphere of Politics: How Bureaucracy Can Facilitate Democracy with Policy Decisions

Martinez, Nicholas S 29 May 2018 (has links)
Public bureaucracy finds itself in a strange place at the intersection of political science and public administration. Political science holds that, within representative democracy, discretion granted to bureaucrats threatens democracy by subverting the politicians who represent the will of the people – bureaucracy vs. democracy. At the same time, public administration holds that, in the interest of promoting democracy, bureaucracy should implement policy objectively, in a way that eliminates the influence of politics from decision-making – politics vs. bureaucracy. These positions are seemingly contradictory: from one perspective, bureaucracy is undemocratic because it stands outside of politics, yet an overreach of politics into the bureaucracy yields undemocratic outcomes. Bureaucracy can thus facilitate democracy precisely by standing outside of politics. This study empirically tests whether local bureaucrats, who should be willing to act in line with influential co-partisans, might still promote democratic outcomes for their constituents through their discretionary decision-making. Florida provides an empirical backdrop for testing bureaucracy's impact on democracy, thanks to a natural experiment created by the passage of new early voting limitations in 2011. Florida's Republican (R) lawmakers passed House Bill 1355 (HB 1355), signed into law by Governor Scott (R), which dramatically limited the early voting days allowed for federal elections: HB 1355 cut the early voting (EV) period from fourteen (14) days to eight (8) days and eliminated the last Sunday before Election Day. The move was widely seen as a political calculation aimed at stifling the participation of Democrats in the 2012 General Election. In seeming lockstep, local Supervisors of Elections (SOEs) from both parties used their statutory discretion over the location of early voting sites to alter the distribution of sites before the 2012 General Election. I find that Republican SOEs did not distribute early voting locations in a way that negatively impacted early voting participation rates (EVPR) in their local precincts. Furthermore, I find that, all else equal, their decisions did not affect EVPR in a statistically different way from EVPR in communities managed by Democrats. Republican SOEs did not add new costs for voters in their communities. I provide new evidence that bureaucrats can indeed limit the influence of undue politics from their co-partisans and promote more democratic outcomes.
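The dissertation's central comparison can be pictured as a precinct-level regression of EVPR on the party of the supervisor, with precinct characteristics as controls. The sketch below uses entirely hypothetical simulated data and the statsmodels library; the variable names and effect sizes are invented for illustration and are not taken from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical precinct-level data mirroring the design: is early-voting
# participation (EVPR) after HB 1355 lower in precincts whose Supervisor
# of Elections is a Republican, holding precinct partisanship constant?
n = 1000
rep_soe = (rng.random(n) < 0.5).astype(float)   # party of the supervisor
dem_share = rng.uniform(0.2, 0.8, n)            # precinct partisanship control
evpr = 0.30 - 0.10 * dem_share + 0.00 * rep_soe + rng.normal(0, 0.05, n)

X = sm.add_constant(np.column_stack([rep_soe, dem_share]))
fit = sm.OLS(evpr, X).fit(cov_type="HC1")       # robust standard errors
print(fit.summary(xname=["const", "rep_soe", "dem_share"]))
# A rep_soe coefficient indistinguishable from zero corresponds to the
# dissertation's finding that Republican supervisors' siting decisions
# did not depress EVPR relative to Democratic-run communities.
```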
25

Misturas de modelos de regressão linear com erros nas variáveis usando misturas de escala da normal assimétrica / Mixtures of linear regression models with measurement errors using scale mixtures of skew-normal distributions

Monteiro, Renata Evangelista 12 March 2018 (has links)
The traditional estimation of mixture regression models is based on the assumption of normality of the component errors and is therefore sensitive to outliers, heavy-tailed errors, and/or asymmetric errors. Another drawback is that, in general, the analysis is restricted to directly observed predictors. We present a proposal to deal with these issues simultaneously in the context of mixture regression, extending the classical normal model by assuming that, for each mixture component, the random errors and the covariates jointly follow a scale mixture of skew-normal distributions. It is further assumed that the covariates are observed with additive error. An MCMC-type algorithm for Bayesian inference is developed and, to show the efficacy of the proposed methods, simulated and real data sets are analyzed.
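For orientation, the classical base model that this work generalizes is the two-component mixture of linear regressions with normal errors, commonly fitted by EM. The sketch below fits only that base model on simulated data with invented settings; the thesis instead assumes scale mixtures of skew-normal distributions, adds measurement error in the covariates, and performs Bayesian inference via MCMC.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-component mixture of linear regressions with normal errors --
# the classical base model that the thesis extends with skew-normal
# scale mixtures and additive measurement error in the covariates.
n = 400
z = rng.random(n) < 0.4                       # latent component labels
x = rng.normal(0.0, 1.0, n)
y = np.where(z, 1.0 + 2.0 * x, -1.0 - 1.0 * x) + rng.normal(0.0, 0.3, n)

X = np.column_stack([np.ones(n), x])
pi = 0.5                                      # mixing weight, component 0
beta = np.array([[0.0, 1.0], [0.0, -0.5]])    # per-component coefficients
sig2 = np.array([1.0, 1.0])                   # per-component variances

for _ in range(200):                          # EM iterations
    # E-step: posterior probabilities of component membership
    dens = np.stack([
        np.exp(-(y - X @ beta[k]) ** 2 / (2.0 * sig2[k]))
        / np.sqrt(2.0 * np.pi * sig2[k]) for k in range(2)])
    w = np.array([pi, 1.0 - pi])[:, None] * dens
    r = w / w.sum(axis=0)
    # M-step: weighted least squares within each component
    for k in range(2):
        WX = X * r[k][:, None]
        beta[k] = np.linalg.solve(X.T @ WX, WX.T @ y)
        sig2[k] = np.sum(r[k] * (y - X @ beta[k]) ** 2) / r[k].sum()
    pi = r[0].mean()

print("weights:", pi, 1 - pi)
print("coefficients:\n", beta)
```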
26

Essays in Spatial Econometrics: Estimation, Specification Test and the Bootstrap

Jin, Fei 09 August 2013 (has links)
No description available.
27

Rubidium Oscillator Error Model for Specific Force and Magnetic Field Susceptibility

Craig, Samantha L. 09 June 2014 (has links)
No description available.
28

Sur les tests de type diagnostic dans la validation des hypothèses de bruit blanc et de non corrélation / On diagnostic-type tests for validating white noise and non-correlation hypotheses

Sango, Joel 09 1900 (has links)
In statistical modeling, we assume that the phenomenon of interest is generated by a structure that can be fitted to the observed data: a main part that should explain the data as well as possible, and a supposedly negligible part called the error, or innovation. Such a model is usually assumed to depend on a finite number of parameters, which are estimated from the data, and the quality of the fitted model depends on these estimators and their properties, for example, whether they are reasonably close to the true values. Questions about the goodness of fit of a model are addressed by studying the probabilistic and statistical properties of the error term, and it is also of interest to assess the presence or absence of relationships between phenomena under complex hypotheses. Portmanteau tests, also called diagnostic tests, are a natural approach to such questions. The thesis is presented in the form of three projects.

In the first project, a paper recently submitted for publication, we study the class of vector multiplicative error models (vMEM). We use the properties of generalized method of moments (GMM) estimators of the model parameters to derive the asymptotic distribution of the residual autocovariances, which allows us to propose new diagnostic tests for this class of models. Under the null hypothesis of model adequacy, the usual Hosking-Ljung-Box (HLB) statistic converges in distribution to a weighted sum of independent chi-squared random variables, each with one degree of freedom. A generalized HLB test statistic is motivated by comparing a vector spectral density estimator of the residuals with the spectral density implied by the null hypothesis. An advantage of the spectral tests is that they only require estimators converging at rate n^{-1/2}, where n is the sample size, and their use is not restricted to a particular estimation technique such as GMM.

In the second project, we derive the asymptotic distribution, under weak dependence, of the cross-covariances of two covariance-stationary processes. Weak dependence is defined here in terms of the limited effect of a given observation on future observations, using the notions of stability and geometric moment contraction; these conditions are more general than the invariance of conditional moments of orders one to four used by several previous authors. A test statistic based on the cross-covariances and on the covariance matrix of their asymptotic distribution is proposed, and its asymptotic distribution is established. In implementing the test, the covariance matrix of the cross-covariances is estimated by a vector autoregressive procedure robust to autocorrelation and heteroskedasticity. Simulations are then carried out to study the properties of the proposed test and to compare it with existing tests.

In the third project, we consider a multivariate periodic and cointegrated model. Periodic models arise in meteorology, hydrology, and economics. When several integrated series are driven by a common trend, regressing one on another can produce spurious results; cointegration occurs when such series admit stationary linear combinations, and the number of linearly independent combinations is called the cointegration rank. With periodic time series, the analogous notion is periodic cointegration: the series are periodically integrated but admit periodically stationary linear combinations. A two-step estimation method is considered: the first step, full-rank estimation, ignores the cointegration rank and provides initial values for the second step, reduced-rank estimation, a nonlinear iterative procedure that takes the rank into account. The asymptotic properties of both estimators are established. To check model adequacy, portmanteau-type test statistics are considered and their asymptotic distributions derived, and simulation results illustrate the behaviour of the proposed tests.
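To make the portmanteau idea concrete: a Ljung-Box-type statistic sums weighted squared residual autocorrelations, and when it is computed from the residuals of a fitted model, its limit is generally not chi-squared but a weighted sum of independent chi-squared variables with one degree of freedom, as the first project shows for vMEM residuals. The sketch below computes the classical univariate statistic and approximates a weighted chi-squared p-value by Monte Carlo; the weights, which in the thesis come from the GMM asymptotics, are taken as given here.

```python
import numpy as np
from scipy.stats import chi2

def acf(x, max_lag):
    """Sample autocorrelations r_1, ..., r_{max_lag}."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c0 = (x @ x) / len(x)
    return np.array([(x[:-k] @ x[k:]) / (len(x) * c0)
                     for k in range(1, max_lag + 1)])

def ljung_box(series, m):
    """Classical Ljung-Box Q(m) with its chi2(m) reference distribution
    (valid for a raw series, not for residuals of a fitted model)."""
    n = len(series)
    r = acf(series, m)
    q = n * (n + 2.0) * np.sum(r ** 2 / (n - np.arange(1, m + 1)))
    return q, chi2.sf(q, m)

def weighted_chi2_sf(q, weights, nsim=200_000, seed=0):
    """Monte Carlo tail probability of sum_j weights[j] * chi2(1) --
    the type of limit residual-based portmanteau statistics have when
    parameters are estimated, as in the vMEM results above."""
    rng = np.random.default_rng(seed)
    draws = rng.chisquare(1.0, (nsim, len(weights))) @ np.asarray(weights)
    return float(np.mean(draws >= q))

# Example: white noise should rarely be rejected.
rng = np.random.default_rng(1)
q, p = ljung_box(rng.normal(size=500), m=10)
p_weighted = weighted_chi2_sf(q, weights=np.ones(10))  # equal weights = chi2(10)
print(f"Q = {q:.2f}, chi2 p = {p:.3f}, simulated p = {p_weighted:.3f}")
```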
29

Optimální odhad stavu modelu navigačního systému / Optimal state estimation of a navigation model system

Papež, Milan January 2013 (has links)
This thesis investigates the possibility of using fixed-point arithmetic in inertial navigation systems that employ local-level navigation frame mechanization equations. Two square-root filtering methods, Potter's square root Kalman filter and the UD factorized Kalman filter, are compared against the conventional Kalman filter and its Joseph stabilized form. The effect of rounding errors on Kalman filter optimality and on the conditioning of the covariance matrix (or its factors) is evaluated for various lengths of the fractional part of the fixed-point computational word. The main contribution of this research is an evaluation of the minimal fixed-point word length for the Phi-angle error model with noise statistics corresponding to tactical-grade inertial measurement units.
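The numerical issue that motivates square-root and UD filtering can be reproduced in a few lines: carry out Kalman covariance updates on a coarse fixed-point grid and compare the conventional update with the Joseph form. The sketch below uses a hypothetical two-state system with invented noise values, not the Phi-angle error model from the thesis; it only illustrates why the form of the covariance update matters at short word lengths.

```python
import numpy as np

def quant(a, frac_bits=12):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    s = 2.0 ** frac_bits
    return np.round(np.asarray(a) * s) / s

# Hypothetical 2-state system: repeated measurement updates carried out
# on a coarse fixed-point grid. The conventional covariance update,
# P <- (I - K H) P, relies on cancellation and drifts from symmetry
# under rounding; the Joseph form is a sum of PSD terms, so elementwise
# rounding preserves its symmetry -- one motivation for square-root and
# UD filters on short-word-length hardware.
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
Qn = 0.001 * np.eye(2)                 # process noise keeps P away from 0
I2 = np.eye(2)
P_conv = P_jos = np.array([[1.0, 0.9], [0.9, 1.0]])

for _ in range(100):
    K = quant(P_conv @ H.T @ np.linalg.inv(H @ P_conv @ H.T + R))
    P_conv = quant((I2 - K @ H) @ P_conv) + Qn          # conventional
    K = quant(P_jos @ H.T @ np.linalg.inv(H @ P_jos @ H.T + R))
    A = I2 - K @ H
    P_jos = quant(A @ P_jos @ A.T + K @ R @ K.T) + Qn   # Joseph form

print("asymmetry, conventional:", np.abs(P_conv - P_conv.T).max())
print("asymmetry, Joseph form :", np.abs(P_jos - P_jos.T).max())
```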
