61

Caractérisation des performances minimales d'estimation pour des modèles d'observations non-standards / Minimal performance analysis for non standard estimation models

Ren, Chengfang 28 September 2015 (has links)
In the parametric estimation context, estimator performance can be characterized, inter alia, by the mean square error (MSE) and the resolution limit. The first quantifies the accuracy of the estimated values and the second defines the estimator's ability to resolve several closely spaced parameters. This thesis deals first with the prediction of the "optimal" MSE using lower bounds in the hybrid estimation context (i.e., when the parameter vector contains both random and non-random parameters), second with the extension of Cramér-Rao bounds to non-standard estimation problems, and finally with the characterization of estimator resolution. The manuscript is therefore divided into three parts. First, we fill some gaps in the literature on hybrid lower bounds on the MSE by using two Bayesian lower bounds: the Weiss-Weinstein bound and a particular form of the Ziv-Zakai family of lower bounds. We show that these extended bounds are tighter than the existing hybrid lower bounds for predicting the optimal MSE. Second, we extend Cramér-Rao lower bounds to less common estimation contexts, namely: (i) when the non-random parameters are subject to equality constraints (linear or nonlinear); (ii) for discrete-time filtering problems in which the state evolution is governed by a Markov chain; and (iii) when the assumed observation model differs from the true data distribution. Finally, we study the resolution and accuracy of estimators through a criterion based directly on the distribution of the estimates. This approach extends the work of Oh and Kashyap and of Clark to multidimensional parameter estimation problems.
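As a minimal illustration of the bound-versus-MSE comparison this thesis generalizes (not taken from the thesis itself), the following sketch checks the empirical MSE of the sample mean against the classical Cramér-Rao bound for a Gaussian location model; all values are illustrative.

```python
import numpy as np

# Toy check: for i.i.d. N(theta, sigma^2) samples, the CRB for theta is
# sigma^2 / N and the sample mean (the ML estimator) attains it.
rng = np.random.default_rng(0)
theta, sigma, N, trials = 2.0, 1.5, 50, 100_000

samples = rng.normal(theta, sigma, size=(trials, N))
estimates = samples.mean(axis=1)              # ML estimate per trial
empirical_mse = np.mean((estimates - theta) ** 2)
crb = sigma ** 2 / N                          # Cramér-Rao lower bound

print(f"empirical MSE = {empirical_mse:.5f}, CRB = {crb:.5f}")
# The two agree here; the thesis studies tighter bounds (Weiss-Weinstein,
# Ziv-Zakai) for hybrid random/non-random parameter vectors, where the CRB
# can be loose or undefined.
```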
62

Analyse de performances en traitement d'antenne : bornes inférieures de l'erreur quadratique moyenne et seuil de résolution limite / Performance analysis in array signal processing: lower bounds on the mean square error and statistical resolution limit

El Korso, Mohammed Nabil 07 July 2011 (has links)
This manuscript concerns performance analysis in array signal processing, i.e., the estimation of parameters of interest using a sensor array. It can be divided into two parts. First, we present the study of some lower bounds on the mean square error related to source localization in the near-field context. Using the Cramér-Rao bound, we investigate the mean square error of the maximum likelihood estimator w.r.t. the directions of arrival in the so-called asymptotic region (i.e., for a high signal-to-noise ratio with a finite number of observations). Then, using bounds other than the Cramér-Rao bound (for example, the McAulay-Seidman bound, the Hammersley-Chapman-Robbins bound and the Fourier Cramér-Rao bound), we predict the threshold phenomenon, i.e., the sharp departure of the estimators' mean square error from the bound at low SNR. Second, we focus on the concept of the statistical resolution limit, i.e., the minimum distance between two closely spaced signals embedded in additive noise that allows correct resolvability/parameter estimation. We define and derive the statistical resolution limit using the Cramér-Rao bound and hypothesis-test approaches for the monodimensional case, before extending the concept to the multidimensional case. Finally, a framework based on the generalized likelihood ratio test is given to assess the validity of the proposed extension, and the extension is applied to several multidimensional observation models.
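The threshold phenomenon mentioned above can be reproduced on a toy problem. The following sketch (illustrative, not from the thesis) runs a coarse-grid ML frequency estimator with parabolic peak refinement for a single complex tone and compares its Monte Carlo MSE with the standard single-tone Cramér-Rao bound: above threshold the MSE tracks the CRB, below it the MSE departs sharply.

```python
import numpy as np

rng = np.random.default_rng(1)
N, f0, trials, P = 64, 0.2, 1000, 512      # P-point zero-padded search grid
n = np.arange(N)

for snr_db in (-12, -6, 0, 10):
    snr = 10 ** (snr_db / 10)
    sigma = np.sqrt(1 / snr)               # unit-amplitude complex tone
    mse = 0.0
    for _ in range(trials):
        w = sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        y = np.exp(2j * np.pi * f0 * n) + w
        mags = np.abs(np.fft.fft(y, P))
        k = int(np.argmax(mags))
        a, b, c = mags[k - 1], mags[k], mags[(k + 1) % P]
        f_hat = (k + 0.5 * (a - c) / (a - 2 * b + c)) / P   # parabolic refinement
        err = (f_hat - f0 + 0.5) % 1 - 0.5                  # wrapped frequency error
        mse += err ** 2 / trials
    crb = 6 / ((2 * np.pi) ** 2 * snr * N * (N ** 2 - 1))   # single-tone CRB
    print(f"SNR {snr_db:+3d} dB: MSE = {mse:.2e}, CRB = {crb:.2e}")
```

The CRB cannot predict where the departure occurs; that is what bounds such as Hammersley-Chapman-Robbins or Ziv-Zakai are used for in the manuscript.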
63

Estimating the parameters of polynomial phase signals

Farquharson, Maree Louise January 2006 (has links)
Nonstationary signals are common in many environments such as radar, sonar, bioengineering and power systems. The nonstationary nature of the signals found in these environments means that classical spectral analysis techniques are not appropriate for estimating their parameters. It is therefore important to develop techniques that can accommodate nonstationary signals. This thesis seeks to achieve this by, firstly, modelling each component of the signal as having a polynomial phase and, secondly, developing techniques for estimating the parameters of these components. Several approaches can be used for estimating the parameters of polynomial phase signals, each with varying degrees of success. Criteria to consider in potential estimation algorithms are (i) the signal-to-noise ratio (SNR) threshold of the algorithm, (ii) the amount of computation required for running the algorithm, and (iii) the closeness of the resulting estimates' mean-square errors to the minimum theoretical bound. These criteria are used to compare the new techniques developed in this thesis with existing techniques. The literature on polynomial phase signal estimation highlights the recurring trade-off between the accuracy of the estimates and the amount of computation required. For example, the Maximum Likelihood (ML) method provides near-optimal estimates above threshold, but incurs a heavy computational cost for higher order phase signals. On the other hand, multi-linear techniques such as the high-order ambiguity function (HAF) method require little computation, but have a significantly higher SNR threshold than the ML method. Of the existing techniques, the cubic phase (CP) function method is promising because it provides an attractive trade-off between SNR threshold and computational complexity. For this reason, the analysis techniques developed in this thesis are derived from the CP function. A limitation of the CP function is its inability to accurately process phase orders greater than three. The first novel contribution of this thesis therefore develops a broadened class of discrete-time higher order phase (HP) functions to address this limitation. This broadened class is achieved by providing a multi-linear extension of the CP function. Monte Carlo simulations demonstrate the statistical advantage of the HP functions compared to the HAFs, and a first order statistical analysis of the HP functions verifies the simulation results. The next novel contribution is a technique called the lower SNR cubic phase function (LCPF) method, an extension of the CP function that enables performance at lower signal-to-noise ratios. The improvement in the SNR threshold is achieved by coherently integrating the CP function over a compact interval in the two-dimensional CP function space. The computational cost of the new algorithm is quite moderate, especially when compared to the ML method. Above threshold, the LCPF method's parameter estimates are asymptotically efficient. Monte Carlo simulation results are presented, and a threshold analysis of the algorithm closely predicts the thresholds observed in these results. The final original contribution extends the LCPF method so that it can process multicomponent cubic phase signals and higher order phase signals. The LCPF method is extended to higher orders by applying a windowing technique, as opposed to adjusting the order of the kernel as implemented in the HP function method. Monte Carlo simulations demonstrate the extension of the LCPF method to higher order phase signals and multicomponent cubic phase signals. Finally, these estimation techniques are applied to real-world scenarios in the fields of power systems analysis, neuroethology and speech analysis.
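For readers unfamiliar with the CP function, a hedged sketch follows, assuming the standard form CP(n, w) = sum_m z[n+m] z[n-m] exp(-j w m^2) from the polynomial-phase literature; its peak over w at a fixed time index estimates the second derivative (frequency rate) of the phase. All signal parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 257
t = np.arange(N) - N // 2                   # centred time axis
a0, a1, a2, a3 = 0.3, 0.05, 1e-3, 2e-6      # cubic phase coefficients
phase = a0 + a1 * t + a2 * t**2 + a3 * t**3
z = np.exp(1j * phase) + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def cp_function(z, n, omegas):
    """Evaluate |CP(n, w)| over a grid of candidate frequency rates w."""
    M = min(n, len(z) - 1 - n)
    m = np.arange(M + 1)
    prod = z[n + m] * z[n - m]              # symmetric lag product: odd phase
                                            # terms cancel, leaving w_true*m^2
    return np.array([np.abs(np.sum(prod * np.exp(-1j * w * m**2))) for w in omegas])

n0 = N // 2                                 # estimate at the centre, t = 0
omegas = np.linspace(0, 0.02, 2001)
w_hat = omegas[np.argmax(cp_function(z, n0, omegas))]
print(f"estimated phi''(0) = {w_hat:.5f}, true value = {2 * a2:.5f}")
```

For a cubic phase, z[n+m] z[n-m] has phase 2*phi(t) + phi''(t) m^2 exactly, which is why the peak lands at phi''(t); the thesis's HP functions generalize this product to higher orders.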
64

Ενίσχυση σημάτων μουσικής υπό το περιβάλλον θορύβου / Enhancement of music signals in a noisy environment

Παπανικολάου, Παναγιώτης 20 October 2010 (has links)
This thesis applies noise reduction algorithms to music signals and draws conclusions about the performance of each algorithm for every musical genre. The main aims are to clarify the basic problems of sound enhancement and to present the various algorithms developed for solving them. After a brief introduction to the basic concepts of sound enhancement, we examine and analyze representative algorithms from each of the three main classes of noise reduction techniques proposed in the speech enhancement literature: spectral subtractive algorithms, statistical-model-based algorithms and subspace algorithms. To evaluate these algorithms we use objective quality measures, whose results allow us to compare the performance of each algorithm. Using four different objective measures, we conduct experiments that yield a set of indicative values enabling both within-class and across-class algorithm comparisons. From these comparisons we draw useful conclusions about the choice of parameters for each algorithm and about the suitability of each algorithm for specific noise conditions and specific musical genres.
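Of the three families compared, spectral subtraction is the simplest to sketch. The following is a minimal, hedged illustration of magnitude spectral subtraction (frame-wise FFT, subtraction of a noise magnitude estimate, spectral flooring, overlap-add resynthesis); function and parameter names are illustrative, not from the thesis.

```python
import numpy as np

def spectral_subtract(x, noise, frame=512, hop=256, floor=0.02):
    """Basic magnitude spectral subtraction with Hann-windowed overlap-add."""
    win = np.hanning(frame)
    # Average noise magnitude spectrum from a noise-only reference recording.
    noise_mag = np.mean([np.abs(np.fft.rfft(win * noise[i:i + frame]))
                         for i in range(0, len(noise) - frame, hop)], axis=0)
    out = np.zeros(len(x))
    for i in range(0, len(x) - frame, hop):
        spec = np.fft.rfft(win * x[i:i + frame])
        # Subtract the noise estimate, flooring to avoid negative magnitudes
        # (the flooring strategy is one source of "musical noise" artifacts).
        mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
        out[i:i + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out

# Illustrative usage on synthetic signals.
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
noise = 0.3 * np.random.default_rng(9).standard_normal(2 * fs)
enhanced = spectral_subtract(clean + noise[:fs], noise[fs:])
```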
65

Transceiver Design Based on the Minimum-Error-Probability Framework for Wireless Communication Systems

Dutta, Amit Kumar January 2015 (has links) (PDF)
Parameter estimation and signal detection are the two key components of a wireless communication system, and they directly impact its bit-error-ratio (BER) performance. Several criteria have been successfully applied to parameter estimation and signal detection, including maximum likelihood (ML), maximum a posteriori probability (MAP), least squares (LS) and minimum mean square error (MMSE). In the linear detection framework, linear MMSE (LMMSE) and LS are the most popular. Nevertheless, these criteria do not necessarily minimize the BER, which is one of the key aspects of any communication receiver design. Minimizing the BER is thus an important design criterion in its own right: the minimum bit/symbol error ratio (MBER/MSER) criterion, which we term the minimum-error-probability (MEP) framework. In this thesis, parameter estimation and signal detection are studied extensively under the MEP framework for various unexplored scenarios of a wireless communication system. The thesis therefore comprises two broad lines of exploration: parameter estimation first, then signal detection. Traditionally, the MEP criterion has been well studied in the context of discrete signal detection over the last decade; here we explore the framework for continuous parameter estimation. We first use it for channel estimation in a frequency-flat fading single-input single-output (SISO) system and then extend it to carrier frequency offset (CFO) estimation in a multi-user MIMO-OFDM system. We observe a reasonably good SNR improvement, in the range of 1 to 2.5 dB, at a fixed BER (around 10^-3). The framework is further extended to multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (MIMO-OFDM) with parameter estimation error statistics obtained from LMMSE only, and its effect on equalizer design under both the MEP and LMMSE criteria is examined. In the second line of exploration, the MEP criterion is applied to signal detection in MIMO-relay and MIMO systems. Various low-complexity solutions are proposed to alleviate the high computational complexity of the MIMO-relay, and various relay configurations, such as cognitive, parallel and multi-hop relaying, are considered. We also propose a data transmission scheme with a rate of 1/Ns (where Ns is the number of antennas at the transmitter) whose components are designed using the MEP criterion. In all these cases, we obtain considerable BER improvement compared to existing solutions.
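To make the MBER idea concrete, here is a hedged sketch in the spirit of the classical MBER linear-detector formulation from the literature (not the thesis's estimators): for BPSK detection via sign(w @ y), a smooth Q-function estimate of the BER is minimized over w by gradient descent, starting from the MMSE solution. All dimensions and step sizes are illustrative.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(3)
K, M, sigma = 4, 4, 0.7
H = rng.standard_normal((M, K))

B = np.sign(rng.standard_normal((K, 256)))      # training bit blocks
Ybar = H @ B                                    # noise-free receive points
s = B[0]                                        # target user's bits

def ber_and_grad(w):
    """Smoothed BER estimate P_e(w) = mean Q(s_i * w.Ybar_i / (sigma*||w||)) and its gradient."""
    nw = np.linalg.norm(w)
    u = w @ Ybar
    g = s * u / (sigma * nw)                    # per-sample decision margins
    ber = 0.5 * np.mean(erfc(g / np.sqrt(2)))   # Q(x) = 0.5*erfc(x/sqrt(2))
    phi = np.exp(-g ** 2 / 2) / np.sqrt(2 * np.pi)
    dg = (s * Ybar) / (sigma * nw) - np.outer(w, s * u) / (sigma * nw ** 3)
    return ber, -(dg @ phi) / len(g)            # d(mean Q)/dw = -mean(phi * dg)

w = np.linalg.solve(H @ H.T + sigma ** 2 * np.eye(M), H[:, 0])  # MMSE start
for _ in range(300):
    ber, grad = ber_and_grad(w)
    w = w - 2.0 * grad                          # plain gradient descent
print(f"smoothed BER estimate for user 1 after MBER descent: {ber:.4e}")
```

The MMSE weight is only the starting point; the descent tilts w to maximize the worst decision margins, which is exactly where MMSE and MBER solutions differ.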
66

ARIMA forecasts of the number of beneficiaries of social security grants in South Africa

Luruli, Fululedzani Lucy 12 1900 (has links)
The main objective of the thesis was to investigate the feasibility of accurately and precisely forecasting the number of both national and provincial beneficiaries of social security grants in South Africa, using simple autoregressive integrated moving average (ARIMA) models. The series of the monthly number of beneficiaries of the old age, child support, foster care and disability grants from April 2004 to March 2010 were used to achieve the objectives of the thesis. The conclusions from analysing the series were that: (1) ARIMA models for forecasting are province and grant-type specific; (2) for some grants, national forecasts obtained by aggregating provincial ARIMA forecasts are more accurate and precise than those obtained by ARIMA modelling of the national series; and (3) for some grants, forecasts obtained by modelling the latest half of the series were more accurate and precise than those obtained from modelling the full series. / Mathematical Sciences / M.Sc. (Statistics)
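A minimal sketch of the modelling workflow described above, on synthetic data (the thesis's grant series are not reproduced here). It assumes the statsmodels library; the ARIMA order (1, 1, 1) is illustrative, whereas the thesis selects province- and grant-specific orders.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly beneficiary counts over the thesis's sample window.
rng = np.random.default_rng(4)
idx = pd.period_range("2004-04", "2010-03", freq="M")
trend = np.linspace(1.0e6, 1.6e6, len(idx))
series = pd.Series(trend + rng.normal(0, 2e4, len(idx)).cumsum(), index=idx)

fit = ARIMA(series, order=(1, 1, 1)).fit()   # illustrative order
print(fit.forecast(steps=12))                # forecasts for the next year
```

Finding (2) of the thesis corresponds, in this setting, to fitting one such model per province and summing the twelve-step forecasts, rather than fitting a single model to the national aggregate.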
67

Multi-Antenna Communication Receivers Using Metaheuristics and Machine Learning Algorithms

Nagaraja, Srinidhi January 2013 (has links) (PDF)
In this thesis, our focus is on low-complexity, high-performance detection algorithms for multi-antenna communication receivers. A key contribution of this thesis is the demonstration that efficient algorithms from metaheuristics and machine learning can be gainfully adapted for signal detection in multi-antenna communication receivers. We first investigate a popular metaheuristic known as the reactive tabu search (RTS), a combinatorial optimization technique, to decode the transmitted signals in large-dimensional communication systems. A basic version of the RTS algorithm is shown to achieve near-optimal performance for 4-QAM in large dimensions. We then propose a method to obtain a lower bound on the BER performance of the optimal detector. This lower bound is tight at moderate to high SNRs and is useful in situations where the performance of the optimal detector is needed for comparison but cannot be obtained due to very high computational complexity. To improve the performance of the basic RTS algorithm for higher-order modulations, we propose variants of the algorithm using layering and multiple explorations. These variants are shown to achieve near-optimal performance for higher-order QAM as well. Next, we propose a new receiver called the linear regression of minimum mean square error (MMSE) residual receiver (referred to as the LRR receiver). The proposed LRR receiver improves on the MMSE receiver by learning a linear regression model for the error of the MMSE receiver. The LRR receiver uses pilot data to estimate the channel, and then uses locally generated training data (not transmitted over the channel) to find the linear regression parameters. The LRR receiver is suitable for applications where the channel remains constant for a long period (slow-fading channels), and it performs well in such settings. Finally, we propose a receiver that uses a committee of linear receivers, whose parameters are estimated from training data using a variant of the AdaBoost algorithm, a celebrated supervised classification algorithm in machine learning. We call this receiver the boosted MMSE (B-MMSE) receiver. We demonstrate that the performance and complexity of the proposed B-MMSE receiver are quite attractive for multi-antenna communication receivers.
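A hedged sketch of tabu-search MIMO detection follows: this is a plain tabu search with a fixed tabu tenure, not the reactive variant (with adaptive tenure) studied in the thesis. Neighbours differ from the current vector in one symbol, and recently reversed assignments are tabu unless they improve the best cost (aspiration). All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
Nt, snr_db = 16, 10
alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # 4-QAM

H = (rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))) / np.sqrt(2)
x_true = rng.choice(alphabet, Nt)
sigma = np.sqrt(Nt / 10 ** (snr_db / 10))
y = H @ x_true + sigma * (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)

def tabu_detect(y, H, iters=200, tenure=8):
    # Zero-forcing initialisation, projected onto the 4-QAM alphabet.
    x = np.linalg.lstsq(H, y, rcond=None)[0]
    x = alphabet[np.argmin(np.abs(x[:, None] - alphabet), axis=1)]
    best, best_cost, tabu = x.copy(), np.linalg.norm(y - H @ x) ** 2, {}
    for it in range(iters):
        cands = []
        for i in range(len(x)):                 # one-symbol-change neighbourhood
            for a in alphabet:
                if a == x[i]:
                    continue
                xn = x.copy(); xn[i] = a
                cands.append((np.linalg.norm(y - H @ xn) ** 2, i, a))
        for c, i, a in sorted(cands, key=lambda t: t[0]):
            # Aspiration: a tabu move is allowed if it beats the best cost so far.
            if tabu.get((i, a), -1) <= it or c < best_cost:
                tabu[(i, x[i])] = it + tenure   # forbid moving back for a while
                x[i] = a
                if c < best_cost:
                    best, best_cost = x.copy(), c
                break
    return best

x_hat = tabu_detect(y, H)
print("symbol errors:", np.count_nonzero(x_hat != x_true))
```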
68

[en] ADVANCED TRANSMIT PROCESSING FOR MIMO DOWNLINK CHANNELS WITH 1-BIT QUANTIZATION AND OVERSAMPLING AT THE RECEIVERS / [pt] PROCESSAMENTO AVANÇADO DE TRANSMISSÃO PARA CANAIS DE DOWNLINK MIMO COM QUANTIZAÇÃO DE 1 BIT E SOBREAMOSTRAGEM NOS RECEPTORES

10 September 2020 (has links)
[en] The IoT refers to a system of interrelated computing devices which aims to transfer data over a network without requiring human-to-human or human-to-computer interaction. These modern communication systems demand low energy consumption and low complexity at the receiver. In this sense, the analog-to-digital converter represents a bottleneck for the development of applications of these new technologies, since its high resolution implies high energy consumption. Research on analog-to-digital converters with coarse quantization has shown that such devices are promising for the design of future communication systems. To balance the loss of information due to the coarse quantization, the resolution in time is increased through oversampling. This thesis considers a system with 1-bit quantization and oversampling at the receiver with a bandlimited multiuser MIMO downlink channel and introduces, as the main contribution, a novel zero-crossing modulation in which the information is conveyed by the time instants of the zero-crossings. This method is used for temporal precoding through waveform design optimization for two different precoders: the temporal maximization of the minimum distance to the decision threshold with spatial zero-forcing, and space-time MMSE precoding. The simulation results show that the proposed zero-crossing approach outperforms the state of the art in terms of bit error rate for both precoders studied. In addition, this novel modulation reduces the computational complexity, allows very-low-complexity devices and saves band resources in comparison to the state-of-the-art method. Additional analyses show that the zero-crossing approach is beneficial in comparison to the state-of-the-art method in terms of a greater minimum distance to the decision threshold and a lower MSE for systems with band limitations. Moreover, a bit-mapping scheme for zero-crossing modulation, similar to Gray coding, was devised to further reduce the bit error rate.
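A toy illustration of the receive model assumed above: a 1-bit ADC applies sign(.) to an oversampled waveform, so amplitude is lost and information survives only in the pattern of zero-crossings. The bit-to-crossing mapping below is invented here for illustration and is not the thesis's precoder design.

```python
import numpy as np

rng = np.random.default_rng(6)
os_factor, n_sym = 4, 8                       # oversampling factor, symbols
bits = rng.integers(0, 2, n_sym)

# Toy mapping: bit 1 toggles the signal level mid-symbol (creating a
# zero-crossing inside that symbol), bit 0 holds the level (no crossing).
level, wave = 1.0, []
for b in bits:
    half = os_factor // 2
    wave += [level] * half
    if b:
        level = -level
    wave += [level] * (os_factor - half)
wave = np.array(wave) + 0.05 * rng.standard_normal(n_sym * os_factor)

rx = np.sign(wave)                            # 1-bit ADC output per sample
flips = np.nonzero(np.diff(rx) != 0)[0] // os_factor   # symbol of each crossing
detected = [int(k in flips) for k in range(n_sym)]
print("sent:    ", bits.tolist())
print("detected:", detected)
```

Oversampling is what makes this work: with one sample per symbol, the crossing position inside the symbol would be unobservable after the sign(.) operation.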
69

Essays in dynamic panel data models and labor supply

Nayihouba, Kolobadia Ada 08 1900 (has links)
This thesis is organized in three chapters. The first two chapters propose a regularization approach to the estimation of two estimators of the dynamic panel data model: the Generalized Method of Moments (GMM) estimator and the Limited Information Maximum Likelihood (LIML) estimator. The last chapter of the thesis is an application of regularization to the estimation of labor supply elasticities using pseudo-panel data models. In a dynamic panel data model, the number of moment conditions increases rapidly with the time dimension, resulting in a large-dimensional covariance matrix of the instruments. Inverting this large-dimensional matrix to compute the estimator leads to poor finite-sample properties. To address this issue, we propose a regularization approach to the estimation of such models in which a generalized inverse of the covariance matrix of the instruments is used instead of its usual inverse. Three regularization schemes are used: principal components; Tikhonov, which is based on ridge regression (also called Bayesian shrinkage); and Landweber-Fridman, which is an iterative method. All these methods involve a regularization parameter which is similar to the smoothing parameter in nonparametric regressions. The finite-sample properties of the regularized estimator depend on this parameter, which needs to be selected among many potential values. In the first chapter (co-authored with Marine Carrasco), we propose the regularized GMM estimator of the dynamic panel data model. Under double asymptotics, we show that our regularized estimators are consistent and asymptotically normal provided the regularization parameter goes to zero more slowly than the sample size goes to infinity. We derive a data-driven selection of the regularization parameter based on an approximation of the higher-order mean square error and show its optimality. Simulations confirm that regularization improves the properties of the usual GMM estimator. As an empirical application, we investigate the effect of financial development on economic growth. In the second chapter (co-authored with Marine Carrasco), we propose the regularized LIML estimator of the dynamic panel data model. The LIML estimator is known to have better small-sample properties than the GMM estimator, but its implementation becomes problematic when the time dimension of the panel becomes large. We derive the asymptotic properties of the regularized LIML under double asymptotics, and a data-driven procedure to select the regularization parameter is proposed. The good performance of the regularized LIML estimator relative to the usual (non-regularized) LIML estimator, the usual GMM estimator and the regularized GMM estimator is confirmed by simulations. In the last chapter, I consider the estimation of the labor supply elasticities of Canadian men through a regularization approach. Unobserved heterogeneity and measurement errors on wage and income variables are known to cause endogeneity issues in the estimation of labor supply models. A popular solution to the endogeneity issue is to group data into categories based on observable characteristics and compute weighted least squares at the group level. This grouping estimator has been proved to be equivalent to an instrumental variables (IV) estimator on the individual-level data using group dummies as instruments. Hence, in the presence of a large number of groups, the grouping estimator exhibits a finite-sample bias similar to that of the IV estimator in the presence of many instruments. I take advantage of this correspondence between grouping estimators and the IV estimator to propose a regularization approach to the estimation of the model. Using this approach leads to wage elasticities that are substantially different from those obtained through grouping estimators.
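The Tikhonov scheme mentioned above can be sketched in a simple linear IV setting (a cross-sectional analogue, not the thesis's dynamic panel estimator): the inverse of the instruments' covariance Z'Z/n is replaced by the regularized inverse (Z'Z/n + alpha*I)^{-1}, with alpha playing the role of the smoothing parameter. Data and alpha values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n, L = 500, 60                                 # many instruments relative to n
Z = rng.standard_normal((n, L))                # instrument matrix
beta_true = 1.0
e = rng.standard_normal(n)                     # common error -> endogeneity
x = Z @ rng.normal(0, 0.2, L) + e              # endogenous regressor
y = beta_true * x + 0.8 * e + 0.5 * rng.standard_normal(n)

def tikhonov_iv(y, x, Z, alpha):
    """IV estimate with a Tikhonov-regularized instrument covariance inverse."""
    S = Z.T @ Z / len(y) + alpha * np.eye(Z.shape[1])
    Pzx, Pzy = Z.T @ x / len(y), Z.T @ y / len(y)
    return (Pzx @ np.linalg.solve(S, Pzy)) / (Pzx @ np.linalg.solve(S, Pzx))

for alpha in (0.0, 0.01, 0.1, 1.0):            # alpha = 0 is unregularized 2SLS
    print(f"alpha = {alpha:4}: beta_hat = {tikhonov_iv(y, x, Z, alpha):.4f}")
```

The thesis's contribution is choosing alpha in a data-driven, higher-order-MSE-optimal way; here it is simply swept over a grid.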
70

Régression non-paramétrique pour variables fonctionnelles / Non parametric regression for functional data

Elamine, Abdallah Bacar 23 March 2010 (has links)
This thesis is divided into four sections, preceded by a general presentation. In the first section, we present the essential mathematical tools needed to understand the subsequent sections. In the second section, we address the problem of local nonparametric regression for functional data belonging to a Hilbert space. First, we propose an estimator of the regression operator, whose construction is related to the resolution of a linear inverse problem. Using a classical decomposition, we establish bounds for the mean square error (MSE) of the estimator of the regression operator. This MSE depends on the small ball probability function of the regressor, on which Gamma-variation-type assumptions are imposed. In the third section, we revisit the work of the preceding section in the setting of functional data belonging to an infinite-dimensional semi-normed space. We establish bounds for the MSE of the regression operator; this MSE can be seen as a function of the small ball probability function. In the last section, we are interested in estimating the auxiliary function associated with the small ball probability function. First, we propose an estimator of this auxiliary function. We then establish its convergence in mean square and its asymptotic normality. Finally, through simulations, we study the behaviour of this estimator in a neighbourhood of zero.
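For orientation, here is a hedged sketch of functional kernel regression in the spirit of this literature: a Nadaraya-Watson-type estimator where each covariate is a curve and closeness is measured by an L2 distance between curves. The thesis's estimator is built through an inverse-problem construction, which this sketch does not reproduce; names and the bandwidth are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
grid = np.linspace(0, 1, 100)                  # common discretization grid
n = 200
X = np.array([np.sin(2 * np.pi * (grid + rng.uniform(0, 1))) for _ in range(n)])
Y = X[:, :50].mean(axis=1) + 0.1 * rng.standard_normal(n)   # scalar response

def fkernel_regress(x_new, X, Y, h):
    """Nadaraya-Watson estimate with an L2 curve distance and Gaussian kernel."""
    d = np.sqrt(np.mean((X - x_new) ** 2, axis=1))   # distance to each curve
    w = np.exp(-(d / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

x_new = X[0] + 0.05 * rng.standard_normal(len(grid))
print("prediction:", fkernel_regress(x_new, X, Y, h=0.2), " target ~", Y[0])
```

The small ball probability function governs how many curves fall within distance h of x_new, and hence the variance of such estimators; this is the quantity whose auxiliary function the last section estimates.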
