  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Filtragem robusta recursiva para sistemas lineares a tempo discreto com parâmetros sujeitos a saltos Markovianos / Recursive robust filtering for discrete-time Markovian jump linear systems

Gildson Queiroz de Jesus 26 August 2011 (has links)
Este trabalho trata de filtragem robusta para sistemas lineares sujeitos a saltos Markovianos discretos no tempo. Serão desenvolvidas estimativas preditoras e filtradas baseadas em algoritmos recursivos que são úteis para aplicações em tempo real. Serão desenvolvidas duas classes de filtros robustos, uma baseada em uma estratégia do tipo H-infinito e a outra baseada no método dos mínimos quadrados regularizados robustos. Além disso, serão desenvolvidos filtros na forma de informação e seus respectivos algoritmos array para estimar esse tipo de sistema. Neste trabalho assume-se que os parâmetros de saltos do sistema Markoviano não são acessíveis. / This work deals with the problem of robust state estimation for discrete-time uncertain linear systems subject to Markovian jumps. Predicted and filtered estimates are developed based on recursive algorithms which are useful in on-line applications. We develop two classes of robust filters: the first is based on an H-infinity approach and the second on a robust regularized least-squares method. Moreover, we develop information filters and their respective array algorithms to estimate this kind of system. We assume that the jump parameters of the Markovian system are not accessible.
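The regularized least-squares strategy mentioned in this abstract builds on the ordinary regularized (ridge) solution. A minimal numerical sketch of that base step, with made-up data, not the author's actual recursive robust filter:

```python
import numpy as np

def regularized_ls(A, b, mu):
    # Solve min_x ||A x - b||^2 + mu ||x||^2 (regularized least squares).
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(50)

x_hat = regularized_ls(A, b, mu=1e-3)  # close to x_true for small noise
```

The robust variants in the thesis replace this plain quadratic criterion with one that bounds the effect of model uncertainty.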
62

An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation.

Lin, TsungPo 26 June 2008 (has links)
Performance engineers face a major challenge in modeling and simulating after-market power systems because of system degradation and measurement errors. Currently, most of the power generation industry uses deterministic data matching to calibrate the model and cascade system degradation, which introduces significant calibration uncertainty and increases the risk of providing performance guarantees. In this research, maximum-likelihood-based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. Replacing deterministic data matching with SDRMC reduces calibration uncertainty and mitigates error propagation into the performance simulation. A modeling and simulation environment for a complex power system with degradation has been developed, in which multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analysis and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by comparing the performance of the calibrated model with its expected new-and-clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first is a screening stage, in which serious gross errors are eliminated in advance using techniques based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated in the second stage, in which serial bias compensation or a robust M-estimator is employed. To improve the efficiency of the combined scheme of least-squares-based data reconciliation and hypothesis-testing-based GED, the Levenberg-Marquardt (LM) algorithm is used as the optimizer.
To reduce computation time and stabilize problem solving for a complex power system such as a combined-cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated into the simultaneous SDRMC scheme. The goal of this research is to reduce calibration uncertainties and, thus, the risk of providing performance guarantees arising from uncertainties in performance simulation.
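The least-squares data reconciliation at the core of this entry (leaving aside model calibration) amounts to projecting raw measurements onto a linear balance constraint. A sketch with an invented three-stream mass balance and made-up tolerances:

```python
import numpy as np

# Mass balance: flow1 = flow2 + flow3, i.e. A @ x = 0.
A = np.array([[1.0, -1.0, -1.0]])
y = np.array([10.3, 6.2, 3.9])           # raw measurements (imbalance 0.2)
Sigma = np.diag([0.04, 0.02, 0.02])      # measurement error covariance

# Minimize (x - y)' Sigma^-1 (x - y) subject to A x = 0 (closed form):
x_rec = y - Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, A @ y)
```

The reconciled values satisfy the balance exactly, with the adjustment spread according to each sensor's error variance (the least-trusted measurement absorbs most of the correction).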
63

Robust estimation for spatial models and the skill test for disease diagnosis

Lin, Shu-Chuan 25 August 2008 (has links)
This thesis focuses on (1) statistical methodologies for the estimation of spatial data with outliers and (2) the classification accuracy of disease diagnosis. Chapter I, Robust Estimation for Spatial Markov Random Field Models: Markov Random Field (MRF) models are useful for analyzing spatial lattice data collected from semiconductor device fabrication and printed circuit board manufacturing processes or agricultural field trials. When outliers are present in the data, classical parameter estimation techniques (e.g., least squares) can be inefficient and potentially mislead the analyst. This chapter extends the MRF model to accommodate outliers and proposes robust parameter estimation methods such as robust M- and RA-estimates. Asymptotic distributions of the estimates with differentiable and non-differentiable robustifying functions are derived. Extensive simulation studies explore the robustness properties of the proposed methods under various amounts of outliers in different patterns. Also provided are analyses of grid data with and without edge information. Three data sets taken from the literature illustrate the advantages of the methods. Chapter II, Extending the Skill Test for Disease Diagnosis: For diagnostic tests, we present an extension to the skill plot introduced by Mozer and Briggs (2003). The method is motivated by diagnostic measures for osteoporosis in a study. By restricting the area under the ROC curve (AUC) according to the skill statistic, we obtain an improved diagnostic test for practical applications that takes misclassification costs into account. We also construct relationships, using the Koziol-Green model and the mean-shift model, between the diseased and healthy groups to improve the skill statistic. Asymptotic properties of the skill statistic are provided. Simulation studies compare the theoretical results and the estimates under various disease rates and misclassification costs.
We apply the proposed method in classification of osteoporosis data.
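As a toy illustration of the M-estimation idea (a one-dimensional Huber location estimate, not the spatial MRF estimators the thesis develops), iteratively reweighted least squares resists outliers that drag the sample mean:

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    # Huber M-estimate of location via iteratively reweighted least squares.
    mu = np.median(x)
    s = np.median(np.abs(x - mu)) / 0.6745   # MAD scale estimate
    for _ in range(max_iter):
        r = (x - mu) / s
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))  # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(5.0, 1.0, 100),        # bulk of the data
                    np.array([50.0, 60.0, 70.0])])    # gross outliers
```

The sample mean is pulled well above 6 by the three outliers, while the Huber estimate stays near the bulk's center of 5.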
64

[en] COMBINING STRATEGIES FOR ESTIMATION OF TREATMENT EFFECTS / [pt] COMBINANDO ESTRATÉGIAS PARA ESTIMAÇÃO DE EFEITOS DE TRATAMENTO

RAFAEL DE CARVALHO CAYRES PINTO 19 January 2018 (has links)
[pt] Uma ferramenta importante na avaliação de políticas econômicas é a estimação do efeito médio de um programa ou tratamento sobre uma variável de interesse. A principal dificuldade desse cálculo deve-se à atribuição do tratamento aos potenciais participantes geralmente não ser aleatória, causando viés de seleção quando desconsiderada. Uma maneira de resolver esse problema é supor que o econometrista observa um conjunto de características determinantes, a menos de um componente estritamente aleatório, da participação. Sob esta hipótese, conhecida como Ignorabilidade, métodos semiparamétricos de estimação foram desenvolvidos, entre os quais a imputação de valores contrafactuais e a reponderação da amostra. Ambos são consistentes e capazes de atingir, assintoticamente, o limite de eficiência semiparamétrico. Entretanto, nas amostras frequentemente disponíveis, o desempenho desses métodos nem sempre é satisfatório. O objetivo deste trabalho é estudar como a combinação das duas estratégias pode produzir estimadores com melhores propriedades em amostras pequenas. Para isto, consideramos duas formas de integrar essas abordagens, tendo como referencial teórico a literatura de estimação duplamente robusta desenvolvida por James Robins e co-autores. Analisamos suas propriedades e discutimos por que podem superar o uso isolado de cada uma das técnicas que os compõem. Finalmente, comparamos, num exercício de Monte Carlo, o desempenho desses estimadores com os de imputação e reponderação. Os resultados mostram que a combinação de estratégias pode reduzir o viés e a variância, mas isso depende da forma como é implementada. Concluímos que a escolha dos parâmetros de suavização é decisiva para o desempenho da estimação em amostras de tamanho moderado. / [en] Estimation of the mean treatment effect is an important tool for evaluating economic policy.
The main difficulty in this calculation is that assignment of potential participants to treatment is generally nonrandom, which leads to selection bias when ignored. A solution to this problem is to suppose that the econometrician observes a set of covariates that determine participation, except for a strictly random component. Under this assumption, known as Ignorability, semiparametric estimation methods have been developed, including imputation of counterfactual outcomes and sample reweighting. Both are consistent and can asymptotically achieve the semiparametric efficiency bound. However, in the sample sizes commonly available, their performance is not always satisfactory. The goal of this dissertation is to study how combining these strategies can lead to better estimation in small samples. We consider two ways of merging these methods, based on the Doubly Robust estimation literature developed by James Robins and his co-authors, analyze their properties, and discuss why they can outperform each of their component methods. Finally, we compare the proposed estimators to imputation and reweighting in a Monte Carlo exercise. Results show that combined strategies can reduce bias and variance, but this depends on how the combination is implemented. We conclude that the choice of smoothing parameters is critical for obtaining good estimates in moderate-size samples.
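The doubly robust idea can be sketched with the standard AIPW (augmented inverse-probability weighting) estimator: with a correctly specified propensity score, the average treatment effect is recovered even when the outcome models are deliberately misspecified. All data below are simulated, and the true propensity stands in for a correctly specified model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.standard_normal(n)
e = 1.0 / (1.0 + np.exp(-x))            # true (known) propensity score
t = rng.random(n) < e                   # treatment assignment
tau = 2.0                               # true average treatment effect
y = 1.0 + x + tau * t + rng.standard_normal(n)

# Naive difference in means: biased because x drives both t and y.
naive = y[t].mean() - y[~t].mean()

# Outcome models deliberately misspecified (constants); AIPW stays
# consistent because the propensity model is correct.
m1 = np.full(n, y[t].mean())
m0 = np.full(n, y[~t].mean())
aipw = np.mean(m1 - m0
               + t * (y - m1) / e
               - (~t) * (y - m0) / (1.0 - e))
```

The augmentation terms correct the bias of the bad outcome models, which is exactly the protection the combined strategies in this dissertation exploit.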
65

Tolérance aux Défaillances par Capteurs Virtuels : application aux Systèmes de Régulation d'un Turboréacteur / Virtual Sensors for Fault-Tolerant System : application to a Jet Engine Control Systems

Souami, Yani 16 July 2015 (has links)
L'industrie aéronautique évolue dans un contexte concurrentiel qui encourage les motoristes et avionneurs à réduire les coûts de production et à améliorer leurs services aux compagnies aériennes tels que la réduction des coûts d'exploitation et de maintenances des avions. Afin de relever ce défi économique, nous proposons dans cette thèse de remplacer l'architecture de régulation actuelle de certains équipements du turboréacteur, par une architecture simplifiée plus économe en capteurs et harnais en remplaçant la redondance matérielle des capteurs par une redondance analytique. Ainsi, en cas de fonctionnement anormal, les capteurs virtuels proposés pourront être utilisés pour consolider la prise de décision sur l'état du capteur par des tests de cohérence et de validation croisée et le cas échéant se substituer aux mesures.Dans ce travail de thèse, on s'est intéressé à la surveillance des systèmes de régulation de géométries variables (régulation du flux d'air en entrée et la quantité de carburant) avec comme contrainte forte la non-modification des paramètres des lois de commande existantes et le maintien de l'opérabilité du turboréacteur avec une dégradation des performances acceptables selon les spécifications du cahier des charges.Pour répondre à ces contraintes opérationnelles, une approche FTC (Fault Tolerant Control) passive est proposée. Cette approche nommée, AVG-FTC (Aircraft Variables Geometries-Fault-Tolerant Control) s'articule autour de plusieurs sous-systèmes mis en cascades. Elle tient compte du caractère instationnaire des systèmes étudiés, des différents couplages entre géométries variables et des incertitudes de modélisation. Ainsi, l'approche utilise un modèle neuronal du capteur couplé à un observateur de type Takagi-Sugeno-LPV (Linéaire à Paramètres Variant) et à un estimateur non linéaire robuste de type NEKF (Filtre de Kalman Étendu Neuronal) qui permet de produire une estimation temps réel des grandeurs surveillées. 
En utilisant la plateforme de prototypage et de tests du motoriste, nous avons pu évaluer l'approche AVG-FTC en simulant plusieurs scénarios de vol en présence de défaillances. Ceci a permis de montrer les performances de l'approche en termes de robustesse, de garantie de stabilité des boucles de régulation et d'opérabilité du turboréacteur. / Over the years, market pressure has ensured that engine manufacturers invest in technology to provide clean, quiet, affordable, reliable, and efficient power. One of the latest improvements is the introduction of virtual sensors that make use of dissimilar signals (analytical redundancy). This is expected to improve weight, flight safety, and availability. However, this new approach has not yet been widely investigated and needs further attention to remove its limitations for certified applications. The concept of virtual sensors goes along with fault-tolerant control strategies that help limit disruptions and maintenance costs. Indeed, a fault-tolerant control (FTC) scheme allows for a leaner hardware structure without decreasing the safety of the system. We propose in this thesis to monitor, through a passive FTC architecture, the Variable Geometry subsystems of the engine: the VSV (Variable Stator Vane) and the FMV (Fuel Metering Valve). A strong constraint is not to change the parameters of the existing controllers. The approach, named AVG-FTC (Aircraft Variable Geometries Fault-Tolerant Control), is based on several cascaded sub-systems that deal with the Linear Parameter Varying (LPV) model of the systems and with modelling errors. 
The proposed FTC scheme uses a neural model of the sensor associated with a Takagi-Sugeno observer and a Neural Extended Kalman Filter (NEKF) that accounts for dynamics not explained by the LPV model, producing a real-time estimate of the monitored outputs. In case of a sensor abnormality, the proposed virtual sensors can then be used as an arbitrator for sensor monitoring or as a healthy sensor used by the controller. To evaluate the approach, several closed-loop simulations on a SNECMA jet-engine simulator have been performed. The results for distinct flight scenarios with different sensor faults show the capabilities of the approach in terms of stability and robustness.
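The virtual-sensor principle can be sketched at scalar scale: a plain Kalman filter with an innovation gate stands in for the thesis's neural/Takagi-Sugeno machinery, and all constants below are invented. Measurements whose innovation fails the consistency test are rejected and the model prediction is served in their place:

```python
import numpy as np

def kalman_virtual_sensor(y, a=0.95, q=0.01, r=0.1, gate=3.0):
    # Scalar Kalman filter: a measurement whose innovation exceeds
    # `gate` standard deviations is declared faulty and replaced by
    # the model prediction (the "virtual sensor" output).
    x, p = 0.0, 1.0
    est, flags = [], []
    for yk in y:
        x, p = a * x, a * a * p + q          # predict
        s = p + r                            # innovation variance
        nu = yk - x                          # innovation
        fault = abs(nu) > gate * np.sqrt(s)
        if not fault:                        # update only with healthy data
            k = p / s
            x, p = x + k * nu, (1 - k) * p
        est.append(x)
        flags.append(fault)
    return np.array(est), np.array(flags)

rng = np.random.default_rng(0)
n = 100
x_true = np.zeros(n)
for t in range(1, n):
    x_true[t] = 0.95 * x_true[t - 1] + 0.1 * rng.standard_normal()
y = x_true + 0.3 * rng.standard_normal(n)    # healthy sensor readings
y[50:56] += 5.0                              # injected sensor bias fault
est, flags = kalman_virtual_sensor(y)
```

Because faulty samples never corrupt the state, the estimate coasts through the fault window and resumes normal tracking afterwards.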
66

Algorithmes d’estimation et de détection en contexte hétérogène rang faible / Estimation and Detection Algorithms for Low Rank Heterogeneous Context

Breloy, Arnaud 23 November 2015 (has links)
Une des finalités du traitement d'antenne est la détection et la localisation de cibles en milieu bruité. Dans la plupart des cas pratiques, comme par exemple le RADAR ou le SONAR actif, il faut estimer dans un premier temps les propriétés statistiques du bruit, et plus précisément sa matrice de covariance ; on dispose à cette fin de données secondaires supposées identiquement distribuées. Dans ce contexte, les hypothèses suivantes sont généralement formulées : bruit gaussien, données secondaires ne contenant que du bruit, et bien sûr matériels fonctionnant parfaitement. Il est toutefois connu aujourd'hui que le bruit en RADAR est de nature impulsive et que l'hypothèse gaussienne est parfois mal adaptée. C'est pourquoi, depuis quelques années, le bruit, et en particulier le fouillis de sol, est modélisé par des processus elliptiques, principalement des Spherically Invariant Random Vectors (SIRV). Dans ce nouveau cadre, la Sample Covariance Matrix (SCM), qui estime classiquement la matrice de covariance du bruit, entraîne des pertes de performances très importantes des détecteurs / estimateurs. Dans ce contexte non gaussien, d'autres estimateurs de la matrice de covariance, mieux adaptés à cette statistique du bruit, ont été développés : la Matrice du Point Fixe (MPF) et les M-estimateurs. Parallèlement, dans un cadre où le bruit se décompose sous la forme d'une somme d'un fouillis rang faible et d'un bruit blanc, la matrice de covariance totale est structurée sous la forme rang faible plus identité. Cette information peut être utilisée dans le processus d'estimation afin de réduire le nombre de données nécessaires. De plus, il est aussi possible d'utiliser le projecteur orthogonal au sous-espace fouillis à la place de la matrice de covariance, ce qui nécessite moins de données secondaires et est aussi plus robuste aux données aberrantes. On calcule classiquement ce projecteur à partir d'un estimateur de la matrice de covariance.
Néanmoins, l'état de l'art ne présente pas d'estimateurs à la fois robustes aux distributions hétérogènes et rendant compte de la structure rang faible des données. C'est pourquoi ces travaux se focalisent sur le développement de nouveaux estimateurs (de covariance et de sous-espace) directement adaptés au contexte considéré. Les contributions de cette thèse s'orientent donc autour de trois axes : - Nous présenterons tout d'abord un modèle statistique précis : celui de sources hétérogènes ayant une covariance rang faible, noyées dans un bruit blanc gaussien. Ce modèle est, par exemple, fortement justifié pour des applications de type radar. Il a cependant été peu étudié pour la problématique d'estimation de matrice de covariance. Nous dériverons donc l'expression du maximum de vraisemblance de la matrice de covariance pour ce contexte. Cette expression n'étant pas une forme close, nous développerons différents algorithmes pour tenter de l'atteindre efficacement. - Nous développerons de nouveaux estimateurs directs du projecteur sur le sous-espace fouillis, ne nécessitant pas d'estimée intermédiaire de la matrice de covariance, adaptés au contexte considéré. - Nous étudierons les performances des estimateurs proposés et de l'état de l'art sur une application de Space Time Adaptive Processing (STAP) pour radar aéroporté, au travers de simulations et de données réelles. / One purpose of array processing is the detection and location of a target in a noisy environment. In most cases (such as RADAR or active SONAR), the statistical properties of the noise, especially its covariance matrix, have to be estimated using i.i.d. samples. Within this context, several hypotheses are usually made: Gaussian distribution, training data containing only noise, perfect hardware. Nevertheless, it is well known that a Gaussian distribution does not provide a good empirical fit to RADAR clutter data. That is why noise is now modeled by elliptical processes, mainly Spherically Invariant Random Vectors (SIRV). In this new context, the use of the SCM (Sample Covariance Matrix), a classical estimate of the covariance matrix, leads to a loss of performance of detectors/estimators. More efficient estimators have been developed, such as the Fixed Point Estimator and M-estimators. If the noise is modeled as low-rank clutter plus white Gaussian noise, the total covariance matrix is structured as low rank plus identity. This information can be used in the estimation process to reduce the number of samples required to reach acceptable performance. Moreover, it is possible to estimate the basis vectors of the clutter-plus-noise orthogonal subspace rather than the total covariance matrix of the clutter, which requires less data and is more robust to outliers. The orthogonal projection onto the clutter-plus-noise subspace is usually calculated from an estimate of the covariance matrix. Nevertheless, the state of the art does not provide estimators that are both robust to various distributions and low-rank structured. In this thesis, we therefore develop new estimators fitted to the considered context to fill this gap. The contributions follow three axes: - We present a precise statistical model: low-rank heterogeneous sources embedded in white Gaussian noise. We derive the maximum likelihood estimator of the covariance matrix for this context. Since this estimator has no closed form, we develop several algorithms to reach it efficiently. - For the considered context, we develop direct clutter subspace estimators that do not require an intermediate covariance matrix estimate. - We study the performance of the proposed methods in a Space Time Adaptive Processing (STAP) application for airborne radar. Tests are performed on both synthetic and real data.
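The "low rank plus identity" structure can be illustrated by estimating the clutter subspace projector from the top eigenvectors of the plain sample covariance matrix — an SCM-based baseline with simulated data, not the robust estimators the thesis develops:

```python
import numpy as np

rng = np.random.default_rng(2)
m, r, n = 10, 2, 200                               # dimension, clutter rank, samples
U, _ = np.linalg.qr(rng.standard_normal((m, r)))   # true clutter subspace basis

# Samples: strong rank-r clutter plus weak white noise.
c = 10.0 * rng.standard_normal((n, r))
data = c @ U.T + 0.1 * rng.standard_normal((n, m))

scm = data.T @ data / n
w, v = np.linalg.eigh(scm)                 # eigenvalues in ascending order
P_clutter = v[:, -r:] @ v[:, -r:].T        # projector onto top-r eigenvectors
```

With a high clutter-to-noise ratio the projector recovers the true subspace almost exactly; the thesis's direct subspace estimators target the harder heterogeneous, outlier-contaminated case where the SCM breaks down.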
67

Adaptive Estimation using Gaussian Mixtures

Pfeifer, Tim 25 October 2023 (has links)
This thesis offers a probabilistic solution to robust estimation using a novel adaptive estimator. Reliable state estimation is a mandatory prerequisite for autonomous systems interacting with the real world. The presence of outliers challenges the Gaussian assumption of numerous estimation algorithms, resulting in a potentially skewed estimate that compromises reliability. Many approaches attempt to mitigate erroneous measurements by using a robust loss function – which often comes with a trade-off between robustness and numerical stability. The proposed approach is purely probabilistic and enables adaptive large-scale estimation with non-Gaussian error models. The introduced Adaptive Mixture algorithm combines a nonlinear least squares backend with Gaussian mixtures as the measurement error model. Factor graphs as graphical representations allow an efficient and flexible application to real-world problems, such as simultaneous localization and mapping or satellite navigation. The proposed algorithms are constructed using an approximate expectation-maximization approach, which justifies their design probabilistically. This expectation-maximization is further generalized to enable adaptive estimation with arbitrary probabilistic models. Evaluating the proposed Adaptive Mixture algorithm in simulated and real-world scenarios demonstrates its versatility and robustness. A synthetic range-based localization shows that it provides reliable estimation results, even under extreme outlier ratios. Real-world satellite navigation experiments prove its robustness in harsh urban environments. The evaluation on indoor simultaneous localization and mapping datasets extends these results to typical robotic use cases. The proposed adaptive estimator provides robust and reliable estimation under various instances of non-Gaussian measurement errors.
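The measurement-error model at the heart of this approach can be sketched as a zero-mean two-component Gaussian mixture (a narrow inlier component plus a broad outlier component) fitted by plain EM. This is a one-dimensional toy with simulated data, not the factor-graph implementation of the thesis:

```python
import numpy as np

def em_gmm_1d(x, iters=200):
    # EM for a zero-mean two-component Gaussian mixture:
    # component 0 = inliers, component 1 = broad outliers.
    w = np.array([0.5, 0.5])
    var = np.array([np.var(x) / 10.0, np.var(x) * 10.0])
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        lik = w / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * x[:, None] ** 2 / var)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update weights and variances
        w = resp.mean(axis=0)
        var = (resp * x[:, None] ** 2).sum(axis=0) / resp.sum(axis=0)
    return w, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 1800),    # inlier errors
                    rng.normal(0.0, 10.0, 200)])   # outlier errors
w, var = em_gmm_1d(x)
```

The fitted weights and variances recover the simulated 90/10 mixture, which is the adaptive step that replaces a hand-tuned robust loss function.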
68

以穩健估計及長期資料分析觀點探討資本資產定價模型 / On the CAPM from the Views of Robustness and Longitudinal Analysis

呂倩如, Lu Chien-ju Unknown Date (has links)
資本資產定價模型 (CAPM) 由Sharpe (1964)、Lintner (1965)及Black (1972)發展出後,近年來已被廣泛的應用於衡量證券之預期報酬率與風險間之關係。一般而言,衡量結果之估計有兩個階段,首先由時間序列分析估計出貝它(beta)係數,然後再檢定廠商或投資組合之平均報酬率與貝它係數之關係。 Fama與MacBeth (1973)利用最小平方法估計貝它係數,再將由橫斷面迴歸方法所得出之斜率係數加以平均後,以統計t-test檢定之。然而以最小平方法估計係數,其估計值很容易受離群值之影響,因此本研究考慮以穩健估計 (robust estimator)來避免此一問題。另外,本研究亦將長期資料分析 (longitudinal data analysis) 引入CAPM裡,期望能檢定貝它係數是否能確實有效地衡量出系統性風險。 論文中以台灣股票市場電子業之實證分析來比較上述不同方法對CAPM的結果,資料蒐集期間為1998年9月至2001年12月之月資料。研究結果顯示出,穩健估計相對於最小平方法就CAPM有較佳的解釋力。而長期資料分析模型更用來衡量債券之超額報酬部分,是否會依上、中、下游或公司之不同而不同。 / The Capital Asset Pricing Model (CAPM) of Sharpe (1964), Lintner (1965) and Black (1972) has been widely used in recent years to measure the relationship between the expected return on a security and its risk. Estimation proceeds in two stages: first, betas are estimated from time series regressions; second, the relationship between mean returns and betas is tested across firms or portfolios. Fama and MacBeth (1973) first used ordinary least squares (OLS) to estimate beta and took time series averages of the slope coefficients from monthly cross-sectional regressions in such studies. However, it is well known that OLS is sensitive to outliers, so robust estimators are employed to avoid this problem. Furthermore, longitudinal data analysis is applied to examine whether betas over time and securities are a valid measure of risk in the CAPM. An empirical study is carried out to present the different approaches, using data on the Information and Electronics industry in the Taiwan stock market from September 1998 to December 2001. For the time series regression analysis, the robust methods lead to more explanatory power than the OLS results. A linear mixed-effects model is used to examine the effects of different streams and companies on security excess returns in these data.
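The OLS-versus-robust contrast for the beta estimate can be sketched with a Huber M-regression fitted by iteratively reweighted least squares, on synthetic returns with deliberately planted outliers (not the Taiwan data of the thesis):

```python
import numpy as np

def huber_regress(x, y, c=1.345, iters=50):
    # Huber M-regression (intercept + slope) via IRLS.
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        res = y - X @ beta
        s = np.median(np.abs(res - np.median(res))) / 0.6745 + 1e-12  # MAD scale
        u = res / s
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # Huber weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)        # weighted normal eqs.
    return beta

rng = np.random.default_rng(0)
n = 100
rm = 0.05 * rng.standard_normal(n)                 # market excess returns
ri = 0.01 + 1.2 * rm + 0.02 * rng.standard_normal(n)
ri[np.argsort(rm)[-5:]] += 0.5                     # outliers at the largest rm

b_ols = np.linalg.lstsq(np.column_stack([np.ones(n), rm]), ri, rcond=None)[0]
b_hub = huber_regress(rm, ri)
```

Outliers placed at extreme market returns inflate the OLS beta badly, while the Huber fit stays near the true slope of 1.2 — the motivation for the robust first stage in the thesis.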
69

Méthodes de rééchantillonnage en méthodologie d'enquête

Mashreghi, Zeinab 10 1900 (has links)
Le sujet principal de cette thèse porte sur l'étude de l'estimation de la variance d'une statistique basée sur des données d'enquête imputées via le bootstrap (ou la méthode de Cyrano). Appliquer une méthode bootstrap conçue pour des données d'enquête complètes (en l'absence de non-réponse) à des données contenant des valeurs imputées, en faisant comme si celles-ci étaient de vraies observations, peut conduire à une sous-estimation de la variance. Dans ce contexte, Shao et Sitter (1996) ont introduit une procédure bootstrap dans laquelle la variable étudiée et l'indicateur de réponse sont rééchantillonnés ensemble et les non-répondants bootstrap sont imputés de la même manière qu'est traité l'échantillon original. L'estimation bootstrap de la variance obtenue est valide lorsque la fraction de sondage est faible. Dans le chapitre 1, nous commençons par faire une revue des méthodes bootstrap existantes pour les données d'enquête (complètes et imputées) et les présentons dans un cadre unifié pour la première fois dans la littérature. Dans le chapitre 2, nous introduisons une nouvelle procédure bootstrap pour estimer la variance sous l'approche du modèle de non-réponse lorsqu'un mécanisme de non-réponse uniforme est présumé. En utilisant seulement l'information sur le taux de réponse, contrairement à Shao et Sitter (1996) qui nécessitent l'indicateur de réponse individuel, l'indicateur de réponse bootstrap est généré pour chaque échantillon bootstrap, menant à un estimateur bootstrap de la variance valide même pour des fractions de sondage non négligeables. Dans le chapitre 3, nous étudions les approches bootstrap par pseudo-population et nous considérons une classe plus générale de mécanismes de non-réponse. Nous développons deux procédures bootstrap par pseudo-population pour estimer la variance d'un estimateur imputé par rapport à l'approche du modèle de non-réponse et à celle du modèle d'imputation.
Ces procédures sont également valides même pour des fractions de sondage non-négligeables. / The aim of this thesis is to study the bootstrap variance estimators of a statistic based on imputed survey data. Applying a bootstrap method designed for complete survey data (full response) in the presence of imputed values and treating them as true observations may lead to underestimation of the variance. In this context, Shao and Sitter (1996) introduced a bootstrap procedure in which the variable under study and the response status are bootstrapped together and bootstrap non-respondents are imputed using the imputation method applied on the original sample. The resulting bootstrap variance estimator is valid when the sampling fraction is small. In Chapter 1, we begin by doing a survey of the existing bootstrap methods for (complete and imputed) survey data and, for the first time in the literature, present them in a unified framework. In Chapter 2, we introduce a new bootstrap procedure to estimate the variance under the non-response model approach when the uniform non-response mechanism is assumed. Using only information about the response rate, unlike Shao and Sitter (1996) which requires the individual response status, the bootstrap response status is generated for each selected bootstrap sample leading to a valid bootstrap variance estimator even for non-negligible sampling fractions. In Chapter 3, we investigate pseudo-population bootstrap approaches and we consider a more general class of non-response mechanisms. We develop two pseudo-population bootstrap procedures to estimate the variance of an imputed estimator with respect to the non-response model and the imputation model approaches. These procedures are also valid even for non-negligible sampling fractions.
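The contrast this entry draws — a naive bootstrap on the completed data versus resampling the variable and its response indicator together and re-imputing, in the spirit of Shao and Sitter (1996) — can be sketched with mean imputation on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 200, 500
y = rng.normal(10.0, 2.0, n)
r = rng.random(n) < 0.7                    # response indicator (70% respond)

def mean_impute(y, r):
    # Fill non-respondents with the respondent mean.
    z = y.copy()
    z[~r] = y[r].mean()
    return z

y_imp = mean_impute(y, r)

naive_means, ss_means = [], []
for _ in range(B):
    idx = rng.integers(0, n, n)
    # Naive: resample completed data, treating imputed values as real.
    naive_means.append(y_imp[idx].mean())
    # Shao-Sitter style: resample (y, r) jointly, then re-impute.
    ss_means.append(mean_impute(y[idx], r[idx]).mean())

v_naive = np.var(naive_means)
v_ss = np.var(ss_means)
```

The naive variance estimate is markedly too small because the imputed values carry no variability of their own, while re-imputing within each bootstrap replicate restores it.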
70

Modelos parcialmente lineares com erros simétricos autoregressivos de primeira ordem / Symmetric partially linear models with first-order autoregressive errors.

Relvas, Carlos Eduardo Martins 19 April 2013 (has links)
Neste trabalho, apresentamos os modelos simétricos parcialmente lineares AR(1), que generalizam os modelos parcialmente lineares para a presença de erros autocorrelacionados seguindo uma estrutura de autocorrelação AR(1) e erros seguindo uma distribuição simétrica ao invés da distribuição normal. Dentre as distribuições simétricas, podemos considerar distribuições com caudas mais pesadas do que a normal, controlando a curtose e ponderando as observações aberrantes no processo de estimação. A estimação dos parâmetros do modelo é realizada por meio do critério de verossimilhança penalizada, que utiliza as funções escore e a matriz de informação de Fisher, sendo todas essas quantidades derivadas neste trabalho. O número efetivo de graus de liberdade e resultados assintóticos também são apresentados, assim como procedimentos de diagnóstico, destacando-se a obtenção da curvatura normal de influência local sob diferentes esquemas de perturbação e análise de resíduos. Uma aplicação com dados reais é apresentada como ilustração. / In this master's dissertation, we present the symmetric partially linear AR(1) models, which generalize the normal partially linear models to allow autocorrelated errors following an AR(1) structure and a symmetric distribution instead of the normal one. Among the symmetric distributions, we can consider distributions with heavier tails than the normal, controlling the kurtosis and down-weighting outlying observations in the estimation process. Parameter estimation is carried out through penalized likelihood, using the score functions and the expected Fisher information, all of which are derived in this work. The effective degrees of freedom and asymptotic results are also presented, as well as diagnostic procedures, highlighting the normal curvature of local influence under different perturbation schemes and residual analysis. An application with real data is given for illustration.
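The AR(1) error structure can be illustrated with the classical Cochrane-Orcutt quasi-differencing step for a purely linear model — the partially linear, symmetric-error machinery of the dissertation is beyond this sketch, and all values below are simulated:

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho, beta = 500, 0.6, 2.0
x = rng.standard_normal(n)
eps = 0.5 * rng.standard_normal(n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + eps[t]      # AR(1) errors
y = beta * x + e                        # true intercept is zero, so omitted

# Step 1: OLS slope, then estimate rho from the residuals.
b_ols = (x @ y) / (x @ x)
res = y - b_ols * x
rho_hat = (res[1:] @ res[:-1]) / (res[:-1] @ res[:-1])

# Step 2: quasi-difference and refit (Cochrane-Orcutt).
ys = y[1:] - rho_hat * y[:-1]
xs = x[1:] - rho_hat * x[:-1]
b_co = (xs @ ys) / (xs @ xs)
```

Quasi-differencing whitens the AR(1) errors, so the second-stage slope has the efficiency of a regression with independent errors.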
