81

Robust Speech Filter And Voice Encoder Parameter Estimation using the Phase-Phase Correlator

Azad, Abul K. 08 November 2019 (has links)
In recent years, linear prediction voice encoders have become very efficient in terms of computing execution time and channel bandwidth usage while providing, in the absence of impulsive noise, natural-sounding synthetic speech signals. This good performance has been achieved via maximum likelihood parameter estimation of an auto-regressive model of order ten that best fits the speech signal, under the assumption that the signal and the noise are Gaussian stochastic processes. However, this method breaks down in the presence of impulse noise, which is common in practice, resulting in harsh or non-intelligible audio signals. In this work, we propose a robust estimator of correlation, the Phase-Phase correlator, that is able to cope with impulsive noise. Utilizing this correlator, we develop a Robust Mixed Excitation Linear Prediction encoder that provides improved audio quality for voiced, unvoiced, and transition speech segments. This is achieved by applying a statistical test to robust Mahalanobis distances to identify the outliers in the corrupted speech signal, which are then replaced with filtered signals. Simulation results reveal that the proposed method outperforms, in variance, bias, and breakdown point, three other robust approaches based on the arcsine law, the polarity coincidence correlator, and the median-of-ratio estimator, without sacrificing the encoder bandwidth efficiency or the compression gain, and while remaining compatible with real-time applications. Furthermore, in the presence of impulsive noise, the perceptual quality of the proposed speech encoder also outperforms the state of the art in terms of mean opinion score. / Doctor of Philosophy / Impulsive noise is a natural phenomenon in everyday experience. It can be viewed as a discontinuity or a drastic change in the natural progression of events. In this research, the disrupting events can occur in signals such as speech, power transmission, the stock market, communication systems, etc. Sudden power outages due to lightning, maintenance, or other catastrophic events are some of the reasons why we may experience performance degradation in our electronic devices. Another example of impulsive noise is playing an old, damaged vinyl record, which results in annoying clicking sounds. At the instant of each click, the true music, speech, or audible waveform is completely destroyed. Yet another example of impulse noise is a sudden crash in the stock market; a sudden dive in the market can destroy the regression and its future predictions. Unfortunately, in the presence of impulsive noise, classical methods are unable to filter out the impulse corruptions. The filtering objective of this dissertation is specific, but not limited, to speech signal processing: we research different filter models to determine the optimum method of eliminating impulsive noise in speech. Note that the optimal filter model differs across time series signals such as speech, the stock market, power systems, etc. In our studies we have shown that our speech filter method outperforms state-of-the-art algorithms. Another major contribution of our research is a speech compression algorithm that is robust to impulse noise in speech. In digital signal processing, a compression method entails representing the same signal with less data while conveying the same message as the original signal.
For example, the human voice produces sounds roughly between 60 Hz and 3500 Hz; in other words, speech occupies approximately 4000 Hz of frequency space. The challenge is whether we can compress speech into half of that space, or even less. This is an attractive proposition because frequency space is limited, yet wireless service providers want to serve as many users as possible without sacrificing quality and ultimately maximize the bottom line. Encoding impulse-corrupted speech produces harsh synthesized audio. We have shown that if the encoding is done with the proposed method, the synthesized audio quality is far superior to the state of the art.
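As a rough illustration of the outlier-identification step described in this abstract, the sketch below flags frames whose robust Mahalanobis distance exceeds a chi-squared cutoff. It is a minimal, hedged sketch: it uses a generic Minimum Covariance Determinant estimator from scikit-learn on synthetic feature vectors, whereas the dissertation derives its robust distances from the Phase-Phase correlator.

```python
# Sketch: flag impulsive frames via robust Mahalanobis distances.
# Illustrative only; the thesis uses the Phase-Phase correlator, here a
# standard MCD covariance estimator stands in for its robust statistics.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def flag_outliers(features, alpha=0.975):
    """features: (n_frames, n_dims) array; returns a boolean outlier mask."""
    mcd = MinCovDet(random_state=0).fit(features)
    d2 = mcd.mahalanobis(features)                # squared robust distances
    cutoff = chi2.ppf(alpha, df=features.shape[1])
    return d2 > cutoff

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 4))                # stand-in "speech feature" frames
frames[::25] += rng.normal(scale=12.0, size=(8, 4))   # inject impulsive corruption
print(flag_outliers(frames).sum(), "frames flagged as outliers")
```

Flagged frames would then be replaced with filtered signal segments, as the abstract describes.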
82

多元自迴歸條件異質變異數之模型設定研究 / A study on model specification of multivariate autoregressive conditional heteroskedasticity

欉清全, Genius Tung Unknown Date (has links)
Economic theory makes clear that, under uncertainty, the choice of financial assets must consider not only the expected future rate of return but also the degree of risk in the decision process. The best measure of risk is the variance of the forecast error. Traditional empirical methods treat this variance as a fixed constant and therefore cannot capture its conditional heteroskedasticity. To address this, Engle (1982) proposed the autoregressive conditional heteroskedasticity (ARCH) model, in which the conditional variance is no longer a fixed constant but a linear function of past squared disturbances, a major breakthrough in empirical methodology. In a simultaneous dynamic system with multiple variables, the additional information shared across equations often increases estimation efficiency, so such a specification intuitively captures the actual behavior of the data better than a univariate one. Later researchers therefore proposed the multivariate ARCH model. This model, however, has its own drawback: the large number of parameters to be estimated sharply reduces the degrees of freedom, which makes the estimates inefficient. How to estimate the model more efficiently from the limited available data is thus an important topic in multivariate ARCH research. This thesis gives a systematic introduction to multivariate ARCH, estimates the parameters using Bayesian VAR methods, and also examines multivariate factor ARCH models.
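To make the ARCH specification mentioned above concrete, the sketch below simulates a univariate ARCH(1) process in which the conditional variance is a linear function of the previous squared disturbance, as in Engle (1982). The parameter values are illustrative assumptions, not estimates from the thesis, and the multivariate and Bayesian extensions studied there are not shown.

```python
# Sketch of the ARCH(1) recursion: sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2.
# Illustrative parameters only.
import numpy as np

def simulate_arch1(n, alpha0=0.1, alpha1=0.6, seed=0):
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = alpha0 / (1.0 - alpha1)            # unconditional variance
    for t in range(1, n):
        sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return eps, sigma2

returns, cond_var = simulate_arch1(1000)
print(returns.var(), cond_var.mean())   # both close to alpha0 / (1 - alpha1) = 0.25
```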
83

Algorithmes stochastiques pour la statistique robuste en grande dimension / Stochastic algorithms for robust statistics in high dimension

Godichon-Baggioni, Antoine 17 June 2016 (has links)
This thesis focuses on stochastic algorithms in high dimension as well as their applications in robust statistics. In what follows, the expression "high dimension" may mean either that the size of the studied sample is large or that the variables considered take values in high-dimensional spaces (not necessarily of finite dimension). In order to analyze this kind of data, it can be advantageous to consider algorithms that are fast, that do not require storing all the data, and that allow the estimates to be updated easily. In large samples of high-dimensional data, the automatic detection of outliers is often difficult. Nevertheless, these outliers, even if they are few, can strongly disturb simple indicators such as the mean and the covariance. We therefore focus on robust estimators, which are not overly sensitive to outliers. In a first part, we are interested in the recursive estimation of the geometric median, a robust location indicator that can be preferred to the mean when part of the studied data is contaminated. For this purpose, we introduce a Robbins-Monro algorithm as well as its averaged version, before building non-asymptotic confidence balls for these estimates and exhibiting their $L^{p}$ and almost sure rates of convergence. In a second part, we focus on the estimation of the Median Covariation Matrix (MCM), a robust dispersion indicator linked to the geometric median which, if the studied variable has a symmetric law, has the same eigenspaces as the covariance matrix. This last property makes the study of the MCM particularly interesting for robust principal component analysis. We therefore introduce a recursive algorithm that simultaneously estimates the geometric median, the MCM, and the $q$ main eigenvectors of the latter. We first establish the strong consistency of the estimators of the MCM, before exhibiting their rates of convergence in quadratic mean. In a third part, building on the work on the estimators of the median and of the Median Covariation Matrix, we exhibit the almost sure and $L^{p}$ rates of convergence of stochastic gradient algorithms and their averaged versions in Hilbert spaces, with less restrictive assumptions than those in the literature. Two applications in robust statistics are then given: estimation of geometric quantiles and robust logistic regression. In the last part, we aim to fit a sphere to a noisy point cloud spread around a complete or truncated sphere. More precisely, we consider a random variable with a truncated spherical distribution, and we want to estimate its center as well as its radius. To this end, we introduce a projected stochastic gradient algorithm and its averaged version. Under reasonable assumptions, we establish the strong consistency of these estimators and their rates of convergence in quadratic mean, as well as the asymptotic normality of the averaged algorithm.
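The recursive geometric-median estimator described in the first part can be illustrated with a short sketch of the Robbins-Monro recursion and its averaged version: each new observation moves the current iterate by a bounded step along the unit direction towards it, which is what makes the estimate insensitive to outliers. The step-size constants and the toy data below are illustrative assumptions, not the settings analysed in the thesis.

```python
# Sketch: Robbins-Monro recursion for the geometric median with averaging.
import numpy as np

def averaged_geometric_median(X, c=1.0, gamma=0.75):
    """X: (n, d) sample; returns the averaged Robbins-Monro estimate."""
    m = X[0].astype(float)                 # current Robbins-Monro iterate
    m_bar = m.copy()                       # running average of the iterates
    for n, x in enumerate(X[1:], start=1):
        diff = x - m
        norm = np.linalg.norm(diff)
        if norm > 0:                       # bounded step towards the new point
            m = m + (c / n ** gamma) * diff / norm
        m_bar = m_bar + (m - m_bar) / (n + 1)
    return m_bar

rng = np.random.default_rng(1)
data = rng.normal(size=(5000, 3))
data[1000:1050] += 100.0                   # 1% gross outliers far from the bulk
print(averaged_geometric_median(data))     # stays near the origin despite the outliers
```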
84

Robustness and preferences in combinatorial optimization

Hites, Romina 15 December 2005 (has links)
In this thesis, we study robust combinatorial problems with interval data. We introduce several new measures of robustness in response to the drawbacks of existing ones. The idea of these new measures is to ensure that the solutions are satisfactory for the decision maker in all scenarios, including the worst-case scenario. We therefore introduce a threshold over the worst-case costs, above which solutions are no longer satisfactory for the decision maker. It is, however, important to consider criteria other than just the worst case. Therefore, in each of these new measures, a second criterion is used to evaluate the performance of the solution in other scenarios, such as the best-case one.

We also study the robust deviation p-elements problem; in particular, we study when its solution is equal to the optimal solution in the scenario where the cost of each element is the midpoint of its corresponding interval.

We then formulate the robust combinatorial problem with interval data as a bicriteria problem. We also integrate the decision maker's preferences over certain types of solutions into the model. We propose a method that uses these preferences to find the set of solutions that are never preferred by any other solution. We call this set the final set.

We study the properties of the final sets from a coherence point of view and from a robustness point of view. From a coherence point of view, we study necessary and sufficient conditions for the final set to be monotonic, for the corresponding preferences to be without cycles, and for the set to be stable. Those that do not satisfy these properties are eliminated, since we believe these properties to be essential. We also study other properties, such as the transitivity of the preference and indifference relations. We note that many of our final sets are included in one another and some are even intersections of other final sets. From a robustness point of view, we compare our final sets with different measures of robustness and with first- and second-degree stochastic dominance. We show which sets contain all of these solutions and which contain only these types of solutions. Therefore, when the decision maker chooses his preferences to find the final set, he knows what types of solutions may or may not be in the set.

Lastly, we implement this method and apply it to the robust shortest path problem. We look at how this method performs on different types of randomly generated instances. / Doctorate in Sciences, Operations Research orientation
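As a toy illustration of the interval-data setting described above, the sketch below evaluates a candidate solution (a subset of elements) under its worst-case, best-case, and midpoint-scenario costs and applies a threshold over the worst case. The data structure, threshold value, and example costs are illustrative assumptions; they do not reproduce the thesis' robustness measures or its robust shortest path experiments.

```python
# Sketch: evaluating one feasible solution under interval element costs.
from dataclasses import dataclass

@dataclass
class IntervalCost:
    lo: float
    hi: float

    @property
    def mid(self) -> float:
        return 0.5 * (self.lo + self.hi)   # midpoint scenario cost

def evaluate(solution, costs, threshold):
    """solution: element indices; costs: list of IntervalCost; threshold: cap on worst case."""
    worst = sum(costs[i].hi for i in solution)     # every chosen cost at its maximum
    best = sum(costs[i].lo for i in solution)
    midpoint = sum(costs[i].mid for i in solution)
    return {"worst": worst, "best": best, "midpoint": midpoint,
            "acceptable": worst <= threshold}

costs = [IntervalCost(2, 5), IntervalCost(1, 8), IntervalCost(3, 4), IntervalCost(2, 3)]
print(evaluate({0, 2, 3}, costs, threshold=13))    # worst case 12 <= 13 -> acceptable
```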
85

Tail behaviour analysis and robust regression meets modern methodologies

Wang, Bingling 11 March 2024 (has links)
This thesis provides models and methodologies built on robust statistics, together with their applications in various domains. Chapter 2 presents a novel partitioning clustering algorithm based on expectiles. The algorithm forms clusters that adapt to the tail behavior of the cluster distributions, making them more robust. The chapter introduces fixed-tau and adaptive-tau clustering schemes and their applications to the cryptocurrency market and to image segmentation. In Chapter 3, a factor-augmented dynamic model is proposed to analyze the tail behavior of high-dimensional time series. This model extracts latent factors driven by tail events and examines their interaction with macroeconomic variables using a VAR model. The methodology enables impulse-response analysis, out-of-sample prediction, and the study of network effects. The empirical study shows the significant impact of financial tail-event-driven factors on macroeconomic variables during different economic periods. Chapter 4 is a pilot analysis of Non-Fungible Tokens (NFTs), specifically CryptoPunks. The author investigates clustering among digital assets using various visualization techniques. The clusters identified through CNN regression and UMAP are associated with the prices and traits of CryptoPunks. Chapter 5 introduces the construction of a price index, the Digital Art Index (DAI), for the NFT art market. The index is created using hedonic regression combined with robust estimators on the top 10 liquid NFT art collections.
It proposes innovative procedures, namely Huberization and DCS-t filtering, to handle outlying price observations and create a robust index. Furthermore, it analyzes the price determinants of the NFT market.
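The expectile notion underlying the Chapter 2 clustering algorithm can be made concrete with a short sketch: the tau-expectile of a sample minimizes an asymmetrically weighted squared loss and can be computed by a simple fixed-point iteration over weighted means. This is only an illustration of the quantity the clusters adapt to, on assumed toy data; it is not the thesis' clustering algorithm, nor its Huberization or DCS-t filtering procedures.

```python
# Sketch: sample tau-expectile via the standard asymmetric-least-squares
# fixed-point iteration. tau = 0.5 recovers the mean; tau near 1 weights the
# upper tail more heavily.
import numpy as np

def sample_expectile(y, tau=0.5, tol=1e-10, max_iter=200):
    y = np.asarray(y, dtype=float)
    m = y.mean()
    for _ in range(max_iter):
        w = np.where(y > m, tau, 1.0 - tau)        # asymmetric weights
        m_new = np.sum(w * y) / np.sum(w)          # weighted-mean fixed point
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(2)
sample = rng.standard_normal(10_000)
print(sample_expectile(sample, tau=0.9))   # upper-tail expectile, above the mean
```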
