51 |
Positionnement robuste et précis de réseaux d’images / Robust and accurate calibration of camera networks. Moulon, Pierre, 10 January 2014
Computing a 3D representation of a rigid scene from a collection of pictures is now possible, even with a simple camera, thanks to the progress made by multiple-view stereo-vision methods. The reconstruction process, arising from photogrammetry, consists in combining information from multiple images taken from different viewpoints in order to identify the relative position and orientation of each shot. Once the positions and orientations of the cameras are retrieved (external calibration), the structure of the scene can be reconstructed. To solve the Structure-from-Motion (SfM) problem, sequential and global methods have been proposed. By nature, sequential methods tend to accumulate errors; this shows up as drifting camera trajectories and, when the pictures are acquired around an object, as reconstructions where the loops do not close.
In contrast, global methods consider the network of cameras as a whole: the camera configuration is searched and optimized so as to best preserve the cyclic consistency constraints of the network. Reconstructions of better quality can be obtained, but at the expense of computation time. This thesis analyzes critical issues at the heart of these external-calibration methods and provides solutions to improve their performance (accuracy, robustness, and speed) and their ease of use (restricted parametrization). We first propose a fast and efficient feature-tracking algorithm. We then show that the systematic use of a contrario robust estimation of parametric models frees the user from choosing detection thresholds and yields a reconstruction pipeline that automatically adapts to the data. Building on these adaptive robust estimates and on a formulation of the problem that allows convex optimizations, we then construct a scalable global calibration pipeline. Our experiments show that the a contrario estimates significantly improve the quality of the estimated camera positions and orientations, while remaining automatic and parameter-free, even on complex camera networks. Finally, we improve the visual appearance of the reconstructions through a convex optimization that enforces color consistency between images.
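The key idea behind the a contrario estimation mentioned above can be sketched in a few lines. The following Python toy (my illustration, not code from the thesis; the model count `n_tests` and scene diameter `r_max` are assumed values) shows the number-of-false-alarms (NFA) test that AC-RANSAC-style methods use to select a data-adaptive inlier threshold for one model hypothesis, instead of a user-supplied one.

```python
# Toy a contrario (NFA) validation of one line-fit hypothesis: keep the
# inlier set whose occurrence by chance is least likely, which yields the
# inlier threshold automatically rather than asking the user for one.
import numpy as np
from math import comb, log

def log_nfa(residuals, n_tests, r_max):
    """Return the best (log10-NFA, threshold) over all candidate inlier counts."""
    r = np.sort(np.abs(residuals))
    n = len(r)
    best = (np.inf, None)
    for k in range(2, n + 1):            # at least 2 points define a line
        p = r[k - 1] / r_max             # chance a random point falls this close
        if p <= 0:
            continue
        l = log(n_tests, 10) + log(comb(n, k), 10) + k * log(p, 10)
        if l < best[0]:
            best = (l, r[k - 1])
    return best

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 0.05, 100)
y[::5] += rng.uniform(-10, 10, 20)       # gross outliers among the matches
a, b = np.polyfit(x, y, 1)               # one candidate model hypothesis
lnfa, thr = log_nfa(y - (a * x + b), n_tests=1000, r_max=10.0)
print(f"log10(NFA) = {lnfa:.1f}, data-adaptive inlier threshold = {thr:.3f}")
```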
|
52 |
Contribution à l’estimation robuste de modèles dynamiques : Application à la commande de systèmes dynamiques complexes / Contribution to the robust estimation of dynamic models: application to the control of complex dynamic systems. Corbier, Christophe, 29 November 2012
The identification of complex dynamic systems remains a concern when the prediction errors contain innovation outliers. If the estimation criterion is badly chosen and ill-adapted, such outliers degrade the estimated model: the distribution of the prediction errors becomes contaminated, exhibits heavy tails, and departs from the normal distribution. To address this problem, there exists a class of so-called robust estimators, less sensitive to outliers, that handle the transition between residuals of very different magnitudes in a smoother way. Huber's M-estimators belong to this class: they are associated with a mixed L2-L1 norm, related to a perturbed Gaussian distribution model known as the gross error model. Within this formal framework, this thesis proposes a set of tools for the estimation and validation of black-box linear and pseudo-linear parametric models, with an extension of the noise interval to small values of the tuning constant of the Huber norm. We present the convergence properties of the robust estimation criterion and of the robust estimator, and we show that extending the noise interval reduces the sensitivity of the estimator's bias and improves robustness to leverage points. For a class of pseudo-linear models, a new framework called L-FTE is presented, together with a new method for determining L, in order to linearize the gradient and the Hessian of the estimation criterion as well as the asymptotic covariance matrix of the estimator. From these expressions, a robust version of the FPE validation criterion is established, and we propose a new decision tool for selecting the estimated model. Experiments on simulated and real processes are presented and analyzed.
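As a rough illustration of the mixed L2-L1 behaviour of the Huber norm discussed above, the following sketch (my stand-in, not the thesis toolbox; the static regression design and the classical tuning constant c = 1.345 are assumptions) fits a linear model by iteratively reweighted least squares: residuals in the central interval get full L2 weight, while larger residuals are down-weighted toward L1.

```python
# Huber M-estimation of a linear model via iteratively reweighted least
# squares (IRLS); `c` plays the role of the Huber-norm tuning constant.
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # plain L2 start
    for _ in range(n_iter):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
        u = r / (s + 1e-12)
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # L2 core, L1 tails
        sw = np.sqrt(w)                                    # weighted LS solve
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=200)
y[:10] += 15.0                                            # innovation outliers
print("robust estimate:", huber_irls(X, y).round(3))      # close to beta_true
```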
|
53 |
Théorie des valeurs extrêmes et applications en environnement / Extreme value theory and applications in the environment. Rietsch, Théo, 14 November 2013
The first two chapters of this thesis address two questions that are critical in climatology. The first is whether a change in the behaviour of temperature extremes can be detected between the beginning of the century and today. We use the Kullback-Leibler divergence, adapted to the extreme-value context, and provide theoretical and simulation results to validate the approach. The second question is where to remove weather stations from a network so as to lose the least information about the behaviour of the extremes. An algorithm, Query By Committee, is developed and applied to a real data set. The last chapter of the thesis deals with a more theoretical subject: the robust estimation of the tail index of a Weibull-type distribution in the presence of random covariates. We propose a robust estimator based on a criterion of minimization of the divergence between two densities and study its properties.
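As a generic illustration of comparing extremes between two periods (a simplified stand-in, not the adapted Kullback-Leibler divergence developed in the thesis), one can fit a generalized Pareto distribution to the threshold exceedances of each period and estimate the divergence between the two fitted tails by Monte Carlo:

```python
# Peaks-over-threshold comparison of two periods via a Monte-Carlo estimate
# of KL(p || r) between the fitted tail distributions. All data are synthetic.
import numpy as np
from scipy import stats

def tail_kl(x_old, x_new, q=0.95, n_mc=100_000, seed=0):
    u_old, u_new = np.quantile(x_old, q), np.quantile(x_new, q)
    p = stats.genpareto(*stats.genpareto.fit(x_old[x_old > u_old] - u_old, floc=0))
    r = stats.genpareto(*stats.genpareto.fit(x_new[x_new > u_new] - u_new, floc=0))
    z = p.rvs(size=n_mc, random_state=seed)
    lr = p.logpdf(z) - r.logpdf(z)
    return np.mean(lr[np.isfinite(lr)])        # drop points outside r's support

rng = np.random.default_rng(2)
early = rng.gumbel(20.0, 3.0, size=40 * 90)    # daily maxima, early period
today = rng.gumbel(21.5, 3.5, size=40 * 90)    # warmer period, heavier tail
print(f"estimated tail divergence: {tail_kl(early, today):.4f}")
```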
|
54 |
Quelques contributions à l'estimation de grandes matrices de précision / Some contributions to large precision matrix estimation. Balmand, Samuel, 27 June 2016
Under the Gaussian assumption, the relationship between conditional independence and sparsity justifies the construction of estimators of the inverse of the covariance matrix (also called the precision matrix) through regularized approaches. This thesis, originally motivated by the problem of image classification, aims at developing a method to estimate the precision matrix in high dimension, that is, when the sample size $n$ is small compared to the dimension $p$ of the model. Our approach relies essentially on the connection between the precision matrix and the linear regression model, and estimates the precision matrix in two steps. The off-diagonal elements are first estimated by solving $p$ minimization problems of the $\ell_1$-penalized square-root least-squares type. The diagonal entries are then obtained from the result of the previous step, by residual analysis or likelihood maximization. We compare these estimators of the diagonal entries in terms of estimation risk. Moreover, we propose a new estimator, designed to account for the possible contamination of the data by outliers, thanks to the addition of an $\ell_2/\ell_1$ mixed-norm regularization term. A non-asymptotic analysis of the consistency of our estimator highlights the relevance of our method.
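A minimal sketch of the two-step, regression-based construction described above is given below. For simplicity it substitutes scikit-learn's ordinary Lasso for the $\ell_1$-penalized square-root least-squares of the thesis, and uses the residual-analysis variant for the diagonal; the dimensions and the tridiagonal ground truth are illustrative assumptions.

```python
# Nodewise-regression estimate of a sparse precision matrix: regress each
# variable on all the others, take off-diagonals from the coefficients and
# diagonals from the residual variances, then symmetrize.
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_precision(X, lam=0.1):
    n, p = X.shape
    Theta = np.zeros((p, p))
    for j in range(p):
        idx = np.arange(p) != j
        fit = Lasso(alpha=lam).fit(X[:, idx], X[:, j])
        resid = X[:, j] - fit.predict(X[:, idx])
        theta_jj = n / np.sum(resid ** 2)         # 1 / residual variance
        Theta[j, j] = theta_jj
        Theta[j, idx] = -fit.coef_ * theta_jj     # off-diagonal entries
    return (Theta + Theta.T) / 2                  # symmetrize

rng = np.random.default_rng(3)
A = np.eye(5) + 0.4 * np.diag(np.ones(4), 1) + 0.4 * np.diag(np.ones(4), -1)
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(A), size=50)  # small n
print(np.round(nodewise_precision(X), 2))         # sparse, close to A
```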
|
55 |
Relevé et consolidation de nuages de points issus de multiples capteurs pour la numérisation 3D du patrimoine / Acquisition and registration of point clouds using multiple sensors for 3D digitization of built heritage. Lachat, Elise, 17 June 2019
Three-dimensional digitization of built heritage is involved in a wide range of applications (documentation, visualization, etc.) and can take advantage of the diversity of available measurement techniques. In order to improve the completeness as well as the quality of the deliverables, more and more digitization projects rely on the combination of point clouds coming from different sources. To this end, knowledge of the performance of the individual sensors, along with the quality of the measurements they produce, is desirable. Several strategies can then be investigated to integrate heterogeneous point clouds within the same project, from their registration to the final modeling steps. A global approach for the simultaneous registration of multiple point clouds is proposed in this work, in which individual weights are introduced for each dataset. Moreover, robust estimators are introduced into the registration framework in order to deal with potential outliers and with the measurement noise inherent to some surveying techniques.
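A toy sketch of how robust estimators enter a registration step is shown below (my illustration of a single pairwise alignment, not the simultaneous multi-cloud method of the thesis): a rigid transform is fit to matched 3D points by iteratively reweighted Procrustes, with Huber-type weights down-weighting outlying correspondences.

```python
# Robust rigid registration of matched 3D points: weighted Kabsch/Procrustes
# inside an IRLS loop, so bad correspondences lose influence on R and t.
import numpy as np

def robust_rigid_fit(P, Q, c=0.02, n_iter=20):
    """Estimate R, t such that R @ p + t ~ q for matched rows of P and Q."""
    w = np.ones(len(P))
    for _ in range(n_iter):
        mp = np.average(P, axis=0, weights=w)
        mq = np.average(Q, axis=0, weights=w)
        H = (w[:, None] * (P - mp)).T @ (Q - mq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        t = mq - R @ mp
        r = np.linalg.norm((R @ P.T).T + t - Q, axis=1)
        w = np.where(r <= c, 1.0, c / r)            # Huber-type weights
    return R, t

rng = np.random.default_rng(4)
P = rng.uniform(-1, 1, (500, 3))
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
Q = (R_true @ P.T).T + np.array([0.5, -0.2, 0.1]) + rng.normal(0, 0.005, (500, 3))
Q[:50] += rng.uniform(-1, 1, (50, 3))               # corrupted correspondences
R, t = robust_rigid_fit(P, Q)
print("rotation error:", np.linalg.norm(R - R_true).round(4), "t:", t.round(3))
```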
|
56 |
Détection et segmentation robustes de cibles mobiles par analyse du mouvement résiduel, à l'aide d'une unique caméra, dans un contexte industriel. Une application à la vidéo-surveillance automatique par drone. / Robust detection and segmentation of moving targets by residual motion analysis, using a single camera, in an industrial context. An application to automatic aerial video surveillance by drone. Pouzet, Mathieu, 05 November 2015
We propose a robust method for detecting moving objects from a moving UAV- or helicopter-mounted camera. The industrial constraints are particularly strong: robustness to large camera motions, robustness to focus and motion blur, accuracy in the detection and segmentation of the moving objects, and a computational cost low enough for practical use. The proposed solution consists in compensating the global motion induced by the camera and then analyzing the residual motion between images to detect and segment the moving targets.
This research domain has been widely explored in the literature, resulting in many fundamentally different methods. After studying a number of them, we found that each has a restricted application scope, unfortunately incompatible with our industrial requirements. To overcome this problem, we propose a methodology consisting in analyzing the results of the state-of-the-art methods so as to understand the strengths and weaknesses of each, and then hybridizing them. We thus propose three successive steps: inter-frame motion compensation; construction of a background of the scene, in order to correctly segment the moving objects in the image; and filtering of these detections by comparing the global motion estimated in the first step with the residual motion estimated by a local algorithm. The first step estimates the global motion between two successive images with a hybrid method combining an ESM minimization algorithm and a Harris feature-matching method; the proposed pyramidal implementation optimizes the computation time, and the robust estimators (an M-estimator for ESM, RANSAC for the feature matching) address the industrial constraints. The second step builds a background image by coupling the results of an inter-frame difference (after compensation) with a region segmentation, thereby merging the static and dynamic information in the images; this background is then compared with the current image to detect the moving objects. Finally, the last step confronts the global motion estimate with the residual motion estimated by a local Lucas-Kanade optical flow in order to validate the detections obtained in the second step. The proposed solution is validated by experiments on a large number of simulated and real image sequences, and we also present several possible applications of the method.
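The compensate-then-difference idea at the core of this pipeline can be sketched with off-the-shelf tools. The snippet below is a simplified stand-in, not the thesis implementation: it uses ORB features and an OpenCV RANSAC homography in place of the Harris + ESM hybrid, then warps the previous frame and thresholds the residual image.

```python
# Global motion compensation followed by residual-motion thresholding:
# match features, fit a homography robustly with RANSAC, warp, difference.
import cv2
import numpy as np

def residual_motion_mask(prev_gray, cur_gray, diff_thresh=25):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to mismatches
    h, w = cur_gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))    # compensate camera motion
    diff = cv2.absdiff(cur_gray, warped)                  # residual motion only
    return (diff > diff_thresh).astype(np.uint8) * 255

# Usage on two consecutive grayscale frames of a hypothetical aerial sequence:
# mask = residual_motion_mask(cv2.imread("f0.png", 0), cv2.imread("f1.png", 0))
```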
|
57 |
Robust Wireless Communications with Applications to Reconfigurable Intelligent Surfaces. Buvarp, Anders Martin, 12 January 2024
The concepts of a digital twin and extended reality have recently emerged, requiring massive amounts of sensor data to be transmitted with low latency and high reliability. For low-latency communications, joint source-channel coding (JSCC) is an attractive approach to error-correction coding compared to the highly complex digital systems currently in use. I propose the use of complex-valued and quaternionic neural networks (QNN) to decode JSCC codes, where the complex-valued neural networks show a significant improvement over real-valued networks and the QNNs achieve exceptionally high performance. Furthermore, I propose mapping encoded JSCC code words to the baseband of the frequency domain in order to enable time/frequency synchronization as well as to mitigate fading using robust estimation theory. Additionally, I perform robust statistical signal processing on the high-dimensional JSCC code, showing significant noise immunity with drastic performance improvements at low signal-to-noise ratio (SNR) levels. The performance of the proposed JSCC codes is within 5 dB of the optimal performance theoretically achievable and outperforms the maximum likelihood decoder at low SNR while exhibiting the smallest possible latency. I designed a Bayesian minimum mean square error estimator for decoding high-dimensional JSCC codes, achieving 99.96% accuracy. With the recent introduction of electromagnetic reconfigurable intelligent surfaces (RIS), a paradigm shift is currently taking place in the world of wireless communications. These new technologies have enabled the inclusion of the wireless channel as part of the optimization process. In order to decode polarization-space modulated RIS reflections, robust polarization-state decoders are proposed using the Weiszfeld algorithm and a generalized Huber M-estimator. Additionally, QNNs are trained and evaluated for the recovery of the polarization state. Furthermore, I propose a novel 64-ary signal constellation based on scaled and shifted Eisenstein integers and generated using media-based modulation with a RIS. The waveform is received using an antenna array and decoded with complex-valued convolutional neural networks. I employ the circular cross-correlation function and a priori knowledge of the phase-angle distribution of the constellation to blindly resolve phase offsets between the transmitter and the receiver without the need for pilots or reference signals. Furthermore, the channel attenuation is determined using statistical methods, exploiting the fact that the constellation has a particular distribution of magnitudes. After resolving the phase and magnitude ambiguities, the noise power of the channel can also be estimated. Finally, I tune an Sq-estimator to robustly decode the Eisenstein waveform. / Doctor of Philosophy / This dissertation covers three novel wireless communication methods: analog coding, communication using the electromagnetic polarization, and communication with a novel signal constellation. The concepts of a digital twin and extended reality have recently emerged, which require a massive amount of sensor data to be transmitted with low latency and high reliability. Contemporary digital communication systems are highly complex, achieving high reliability at the expense of high latency. In order to reduce the complexity, and hence the latency, I propose an analog coding scheme that directly maps the sensor data to the wireless channel.
Furthermore, I propose the use of neural networks for decoding at the receiver, hence the name neural receiver. I employ various data types in the neural receivers, thereby leveraging the mathematical structure of the data in order to achieve exceptionally high performance. Another key contribution here is the mapping of the analog codes to the frequency domain, enabling time and frequency synchronization. I also utilize robust estimation theory to significantly improve the performance and reliability of the coding scheme. With the recent introduction of electromagnetic reconfigurable intelligent surfaces (RIS), a paradigm shift is currently taking place in the world of wireless communications. These new technologies have enabled the inclusion of the wireless channel as part of the optimization process. Therefore, I propose to use the polarization state of the electromagnetic wave to convey information over the channel, where the polarization is determined using a RIS. As with the analog codes, I also extensively employ various methods of robust estimation to improve the recovery of the polarization at the receiver. Finally, I propose a novel communications signal constellation generated by a RIS that allows for equal probability of error at the receiver. Traditional communication systems use reference symbols for synchronization; in this work, I instead exploit statistical methods and the known distributions of the properties of the transmitted signal to synchronize without reference symbols. This is referred to as blind channel estimation. The reliability of the third communication method is enhanced using a state-of-the-art robust estimation method.
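The Weiszfeld algorithm mentioned for robust polarization-state decoding is a short fixed-point iteration for the geometric median. The sketch below (made-up Stokes-like vectors with impulsive noise, not the dissertation's decoder) shows how the median resists corruption that badly perturbs the arithmetic mean.

```python
# Weiszfeld iteration for the geometric median: a robust "average" of noisy
# polarization-state vectors that is insensitive to impulsive-noise hits.
import numpy as np

def weiszfeld(X, n_iter=100, eps=1e-9):
    m = X.mean(axis=0)                      # initialize at the arithmetic mean
    for _ in range(n_iter):
        d = np.linalg.norm(X - m, axis=1)
        w = 1.0 / np.maximum(d, eps)        # inverse-distance weights
        m = (w[:, None] * X).sum(axis=0) / w.sum()
    return m

rng = np.random.default_rng(5)
s_true = np.array([1.0, 0.0, 0.0])          # transmitted polarization state
S = s_true + rng.normal(0, 0.05, (200, 3))  # received samples
S[:20] += rng.normal(0, 3.0, (20, 3))       # impulsive noise hits
print("mean   error:", np.linalg.norm(S.mean(axis=0) - s_true).round(4))
print("median error:", np.linalg.norm(weiszfeld(S) - s_true).round(4))
```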
|
58 |
Highly Robust and Efficient Estimators of Multivariate Location and Covariance with Applications to Array Processing and Financial Portfolio Optimization. Fishbone, Justin Adam, 21 December 2021
Throughout stochastic data processing fields, mean and covariance matrices are commonly employed for purposes such as standardizing multivariate data through decorrelation. For practical applications, these matrices are usually estimated, and often, the data used for these estimates are non-Gaussian or may be corrupted by outliers or impulsive noise. To address this, robust estimators should be employed.
However, in signal processing, where complex-valued data are common, the robust estimation techniques currently employed, such as M-estimators, provide limited robustness in the multivariate case. For this reason, this dissertation extends, to the complex-valued domain, the high-breakdown-point class of multivariate estimators called S-estimators. This dissertation defines S-estimators in the complex-valued context, and it defines their properties for complex-valued data.
One major shortcoming of the leading high-breakdown-point multivariate estimators, such as the Rocke S-estimator and the smoothed hard rejection MM-estimator, is that they lack statistical efficiency at non-Gaussian distributions, which are common with real-world applications. This dissertation proposes a new tunable S-estimator, termed the Sq-estimator, for the general class of elliptically symmetric distributions—a class containing many common families such as the multivariate Gaussian, K-, W-, t-, Cauchy, Laplace, hyperbolic, variance gamma, and normal inverse Gaussian distributions.
This dissertation demonstrates the diverse applicability and performance benefits of the Sq-estimator through theoretical analysis, empirical simulation, and the processing of real-world data. Through analytical and empirical means, the Sq-estimator is shown to generally provide higher maximum efficiency than the leading maximum-breakdown estimators, and it is also shown to generally be more stable with respect to initial conditions. To illustrate the theoretical benefits of the Sq-estimator for complex-valued applications, the efficiencies and influence functions of adaptive minimum variance distortionless response (MVDR) beamformers based on S- and M-estimators are compared. To illustrate the finite-sample performance benefits of the Sq-estimator, empirical simulation results of multiple signal classification (MUSIC) direction-of-arrival estimation are explored. Additionally, the optimal investment of real-world stock data is used to show the practical performance benefits of the Sq-estimator with respect to robustness to extreme events, estimation efficiency, and prediction performance. / Doctor of Philosophy / Throughout stochastic processing fields, mean and covariance matrices are commonly employed for purposes such as standardizing multivariate data through decorrelation. For practical applications, these matrices are usually estimated, and often, the data used for these estimates are non-normal or may be corrupted by outliers or large sporadic noise. To address this, estimators should be employed that are robust to these conditions.
However, in signal processing, where complex-valued data are common, the robust estimation techniques currently employed provide limited robustness in the multivariate case. For this reason, this dissertation extends, to the complex-valued domain, the highly robust class of multivariate estimators called S-estimators. This dissertation defines S-estimators in the complex-valued context, and it defines their properties for complex-valued data.
One major shortcoming of the leading highly robust multivariate estimators is that they may require unreasonably large numbers of samples (i.e. they may have low statistical efficiency) in order to provide good estimates at non-normal distributions, which are common with real-world applications. This dissertation proposes a new tunable S-estimator, termed the Sq-estimator, for the general class of elliptically symmetric distributions—a class containing many common families such as the multivariate Gaussian, K-, W-, t-, Cauchy, Laplace, hyperbolic, variance gamma, and normal inverse Gaussian distributions.
This dissertation demonstrates the diverse applicability and performance benefits of the Sq-estimator through theoretical analysis, empirical simulation, and the processing of real-world data. Through analytical and empirical means, the Sq-estimator is shown to generally provide higher maximum efficiency than the leading highly robust estimators, and its solutions are also shown to generally be less sensitive to initial conditions. To illustrate the theoretical benefits of the Sq-estimator for complex-valued applications, the statistical efficiencies and robustness of adaptive beamformers based on various estimators are compared. To illustrate the finite-sample performance benefits of the Sq-estimator, empirical simulation results of signal direction-of-arrival estimation are explored. Additionally, the optimal investment of real-world stock data is used to show the practical performance benefits of the Sq-estimator with respect to robustness to extreme events, estimation efficiency, and prediction performance.
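To give a concrete feel for how robust weights enter multivariate location and scatter estimation, the sketch below implements a simple fixed-point M-estimation iteration with Huber-type weights on Mahalanobis distances. This is a simpler relative of the S- and Sq-estimators studied in the dissertation, not their actual definition; the tuning constant and the heuristic scatter normalization are my assumptions.

```python
# Fixed-point M-estimation of multivariate location and scatter: points far
# from the current estimate (in Mahalanobis distance) are down-weighted.
import numpy as np

def m_estimate(X, c=2.5, n_iter=100):
    mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        Xc = X - mu
        d = np.sqrt(np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc))
        w = np.where(d <= c, 1.0, c / d)          # Huber-type weights
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mu
        S = (w[:, None] ** 2 * Xc).T @ Xc / np.sum(w ** 2)  # heuristic scaling
    return mu, S

rng = np.random.default_rng(6)
X = rng.multivariate_normal([0, 0, 0], np.diag([1.0, 2.0, 0.5]), size=300)
X[:30] += 20.0                                    # 10% heavy contamination
mu, S = m_estimate(X)
print("robust location:", mu.round(2))            # near the true zero mean
```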
|
59 |
Robust Identification, Estimation, and Control of Electric Power Systems using the Koopman Operator-Theoretic Framework. Netto, Marcos, 19 February 2019
The study of nonlinear dynamical systems via the spectrum of the Koopman operator has emerged as a paradigm shift, from Poincaré's geometric picture that centers attention on the evolution of states to the Koopman operator's picture that focuses on the evolution of observables. The Koopman operator-theoretic framework rests on the idea of lifting the states of a nonlinear dynamical system to a higher-dimensional space; these lifted states are referred to as the Koopman eigenfunctions. To determine the Koopman eigenfunctions, one performs a nonlinear transformation of the states by relying on so-called observables, that is, scalar-valued functions of the states. In other words, one executes a change of coordinates from the state space to another set of coordinates, called the Koopman canonical coordinates. The variables defined on these intrinsic coordinates evolve linearly in time, despite the underlying system being nonlinear. Since the Koopman operator is linear, it is natural to exploit its spectral properties; indeed, the theory surrounding the spectral properties of linear operators has well-known implications in electric power systems, with examples including small-signal stability analysis and direct methods for transient stability analysis based on the Lyapunov function. From the applications' standpoint, this framework based on the Koopman operator is attractive because it is capable of revealing linear and nonlinear modes by applying only well-established tools that have been developed for linear systems. Given the challenges associated with the high dimensionality and increasing uncertainty of power system models, researchers and practitioners are seeking alternative modeling approaches capable of incorporating information from measurements. This is fueled by the increasing amount of data made available by the wide-scale deployment of measuring devices such as phasor measurement units and smart meters. Along these lines, Koopman operator theory is a promising framework for integrating data analysis into our mathematical knowledge and brings an exciting perspective to the community. The present dissertation reports on the application of the Koopman operator for identification, estimation, and control of electric power systems. A dynamic state estimator based on the Koopman operator has been developed; it compares favorably against model-based approaches, in particular for centralized dynamic state estimation. Also, a data-driven method to compute participation factors for nonlinear systems based on Koopman mode decomposition has been developed; it generalizes the original definition of participation factors under certain conditions. / Doctor of Philosophy / Electric power systems are complex, large-scale, and, given the bidirectional causality between economic growth and electricity consumption, constantly being expanded. In the U.S., some electric power grid facilities date back to the 1880s, and this aging system is operating at its capacity limits. In addition, international pressure for sustainability is driving an unprecedented deployment of renewable energy sources into the grid. Unlike other primary sources of electric energy such as coal and nuclear, the electricity generated from renewable sources is strongly influenced by the weather conditions, which are very challenging to forecast even for short periods of time.
Within this context, the mathematical models that have aided engineers to design and operate electric power grids over the past decades fall short when uncertainties are incorporated into the models of such high-dimensional systems. Consequently, researchers are investigating alternative data-driven approaches. This is motivated not only by the need to overcome the above challenges but also by the increasing amount of data produced by today's powerful computational resources and experimental apparatus. In power systems, a massive amount of data will be available thanks to the deployment of measuring devices called phasor measurement units. Along these lines, Koopman operator theory is a promising framework for integrating data analysis into our mathematical knowledge, and it brings an exciting perspective on the treatment of high-dimensional systems at the forefront of science and technology. In the research reported in this dissertation, Koopman operator theory has been exploited to seek solutions to some of the challenges that threaten the safe, reliable, and efficient operation of electric power systems.
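The standard data-driven route to the Koopman spectrum used in this line of work is dynamic mode decomposition (DMD). The sketch below (toy oscillation data, not PMU measurements) recovers modal damping and frequency directly from snapshot pairs, with no system model.

```python
# Exact DMD: a finite-dimensional, data-driven approximation of the Koopman
# operator built from snapshot pairs (X, X'). Eigenvalues give modal damping
# and frequency; columns of `modes` give the corresponding spatial shapes.
import numpy as np

def dmd(X, Xp, r=2):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].conj().T      # rank-r truncation
    A_tilde = U.conj().T @ Xp @ V / s               # small projected operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Xp @ V / s @ W                          # exact DMD modes
    return eigvals, modes

t = np.linspace(0, 10, 501)
decay, freq = 0.1, 2.0
x1 = np.exp(-decay * t) * np.cos(freq * t)          # two coupled 'sensor' signals
x2 = np.exp(-decay * t) * np.sin(freq * t)
data = np.vstack([x1, x2, 0.5 * x1 - 0.2 * x2])     # a third, dependent sensor
X, Xp = data[:, :-1], data[:, 1:]
eigvals, _ = dmd(X, Xp)
dt = t[1] - t[0]
print("continuous-time eigenvalues:", np.round(np.log(eigvals) / dt, 3))
# expected: approximately -0.1 +/- 2.0j, the true damping and frequency
```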
|
60 |
ROBUST INFERENCE FOR HETEROGENEOUS TREATMENT EFFECTS WITH APPLICATIONS TO NHANES DATA. Ran Mo (20329047), 10 January 2025
Estimating the conditional average treatment effect (CATE) using data from the National Health and Nutrition Examination Survey (NHANES) provides valuable insights into the heterogeneous impacts of health interventions across diverse populations, facilitating public health strategies that consider individual differences in health behaviors and conditions. However, estimating the CATE with NHANES data faces challenges often encountered in observational studies, such as outliers, heavy-tailed error distributions, skewed data, model misspecification, and the curse of dimensionality. To address these challenges, this dissertation presents three consecutive studies that thoroughly explore robust methods for estimating heterogeneous treatment effects.

The first study introduces an outlier-resistant estimation method by incorporating M-estimation, replacing the $L_2$ loss in the traditional inverse propensity weighting (IPW) method with a robust loss function. To assess the robustness of our approach, we investigate its influence function and breakdown point. Additionally, we derive the asymptotic properties of the proposed estimator, enabling valid inference for the proposed outlier-resistant estimator of the CATE.

The method proposed in the first study relies on a symmetry assumption, as commonly required by standard outlier-resistant methods. To remove this assumption while maintaining unbiasedness, the second study employs the adaptive Huber loss, which dynamically adjusts the robustification parameter based on the sample size to achieve an optimal tradeoff between bias and robustness. The robustification parameter is derived explicitly from theoretical results, making it unnecessary to rely on time-consuming data-driven methods for its selection. We also derive concentration and Berry-Esseen inequalities to precisely quantify the convergence rates as well as the finite-sample performance.

In both previous studies, the propensity scores were estimated parametrically, which is sensitive to model misspecification. The third study extends the robust estimator from the first project by plugging in a kernel-based nonparametric estimate of the propensity score with sufficient dimension reduction (SDR). Specifically, we adopt robust minimum average variance estimation (rMAVE) for the central mean space under the potential-outcome framework. Together with higher-order kernels, the resulting CATE estimation gains enhanced efficiency.

In all three studies, the theoretical results are derived, and confidence intervals are constructed for inference based on these findings. The properties of the proposed estimators are verified through extensive simulations. Additionally, applying these methods to NHANES data validates the estimators' ability to handle diverse and contaminated datasets, further demonstrating their effectiveness in real-world scenarios.
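A minimal sketch of the first study's outlier-resistant IPW idea is given below, under simplifying assumptions: synthetic data, a logistic propensity model, and scikit-learn's HuberRegressor standing in for the dissertation's M-estimation step. The pseudo-outcome has the CATE as its conditional mean, so a robust regression of it on covariates estimates the heterogeneous effect despite heavy-tailed errors.

```python
# Outlier-resistant IPW estimation of the CATE: build inverse-propensity
# pseudo-outcomes, then fit them with a Huber loss instead of L2.
import numpy as np
from sklearn.linear_model import LogisticRegression, HuberRegressor

rng = np.random.default_rng(7)
n = 2000
X = rng.normal(size=(n, 2))
e = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))   # true propensity
T = rng.binomial(1, e)
tau = 1.0 + 2.0 * X[:, 0]                                # heterogeneous effect
Y = X[:, 1] + T * tau + rng.standard_t(df=2, size=n)     # heavy-tailed errors

e_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]
psi = (T / e_hat - (1 - T) / (1 - e_hat)) * Y            # E[psi | X] = CATE
cate = HuberRegressor(epsilon=1.35).fit(X, psi)          # robust regression
print("CATE coefficients (true ~ [2, 0], intercept ~ 1):",
      cate.coef_.round(2), round(cate.intercept_, 2))
```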
|