91

Development of a Block Processing Carrier to Noise Ratio Estimator for the Global Positioning System

Sayre, Michelle Marie 10 December 2003 (has links)
No description available.
92

Robust Registration of ToF and RGB-D Camera Point Clouds

Chen, Shuo January 2021 (has links)
This thesis presents a comparison of the M-estimator, BLAVE, and RANSAC methods for point cloud registration. The comparison is performed empirically by applying all the estimators to simulated data corrupted with noise and gross errors, as well as to ToF and RGB-D data. RANSAC proves to be the fastest and most robust estimator in the comparison. The 2D feature extraction methods Harris corner detector, SIFT, and SURF and the 3D extraction method ISS are also compared on real-world scene data. The SIFT algorithm extracts the most feature points with the most accurate features among all the extraction methods across the different datasets. Finally, the ICP algorithm is used to refine the registration result based on the estimated initial transform.
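As a rough illustration of the estimation step compared above, the following sketch implements a generic RANSAC rigid registration from putative 3D correspondences (e.g., matched feature points), with a least-squares (Kabsch) refit on the consensus set. It is not the thesis's exact pipeline; the function names, threshold, and iteration count are illustrative only.

```python
# Hedged sketch: RANSAC estimation of a rigid transform (R, t) between two
# point clouds with known putative correspondences src[i] <-> dst[i].
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def ransac_registration(src, dst, n_iter=1000, inlier_thresh=0.05, rng=None):
    """Robustly estimate (R, t) from matched 3D points."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), dtype=bool)
    best_model = rigid_transform(src, dst)                 # fallback: plain LS fit
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        R, t = rigid_transform(src[idx], dst[idx])
        residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (R, t)
    if best_inliers.sum() >= 3:                            # refit on the consensus set
        best_model = rigid_transform(src[best_inliers], dst[best_inliers])
    return best_model
```

In a pipeline like the one described, the returned (R, t) would serve as the initial transform that ICP subsequently refines.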
93

Highly Robust and Efficient Estimators of Multivariate Location and Covariance with Applications to Array Processing and Financial Portfolio Optimization

Fishbone, Justin Adam 21 December 2021 (has links)
Throughout stochastic data processing fields, mean and covariance matrices are commonly employed for purposes such as standardizing multivariate data through decorrelation. For practical applications, these matrices are usually estimated, and often, the data used for these estimates are non-Gaussian or may be corrupted by outliers or impulsive noise. To address this, robust estimators should be employed. However, in signal processing, where complex-valued data are common, the robust estimation techniques currently employed, such as M-estimators, provide limited robustness in the multivariate case. For this reason, this dissertation extends, to the complex-valued domain, the high-breakdown-point class of multivariate estimators called S-estimators. This dissertation defines S-estimators in the complex-valued context, and it defines their properties for complex-valued data. One major shortcoming of the leading high-breakdown-point multivariate estimators, such as the Rocke S-estimator and the smoothed hard rejection MM-estimator, is that they lack statistical efficiency at non-Gaussian distributions, which are common with real-world applications. This dissertation proposes a new tunable S-estimator, termed the Sq-estimator, for the general class of elliptically symmetric distributions—a class containing many common families such as the multivariate Gaussian, K-, W-, t-, Cauchy, Laplace, hyperbolic, variance gamma, and normal inverse Gaussian distributions. This dissertation demonstrates the diverse applicability and performance benefits of the Sq-estimator through theoretical analysis, empirical simulation, and the processing of real-world data. Through analytical and empirical means, the Sq-estimator is shown to generally provide higher maximum efficiency than the leading maximum-breakdown estimators, and it is also shown to generally be more stable with respect to initial conditions. To illustrate the theoretical benefits of the Sq for complex-valued applications, the efficiencies and influence functions of adaptive minimum variance distortionless response (MVDR) beamformers based on S- and M-estimators are compared. To illustrate the finite-sample performance benefits of the Sq-estimator, empirical simulation results of multiple signal classification (MUSIC) direction-of-arrival estimation are explored. Additionally, the optimal investment of real-world stock data is used to show the practical performance benefits of the Sq-estimator with respect to robustness to extreme events, estimation efficiency, and prediction performance. / Doctor of Philosophy / Throughout stochastic processing fields, mean and covariance matrices are commonly employed for purposes such as standardizing multivariate data through decorrelation. For practical applications, these matrices are usually estimated, and often, the data used for these estimates are non-normal or may be corrupted by outliers or large sporadic noise. To address this, estimators should be employed that are robust to these conditions. However, in signal processing, where complex-valued data are common, the robust estimation techniques currently employed provide limited robustness in the multivariate case. For this reason, this dissertation extends, to the complex-valued domain, the highly robust class of multivariate estimators called S-estimators. This dissertation defines S-estimators in the complex-valued context, and it defines their properties for complex-valued data. 
One major shortcoming of the leading highly robust multivariate estimators is that they may require unreasonably large numbers of samples (i.e. they may have low statistical efficiency) in order to provide good estimates at non-normal distributions, which are common with real-world applications. This dissertation proposes a new tunable S-estimator, termed the Sq-estimator, for the general class of elliptically symmetric distributions—a class containing many common families such as the multivariate Gaussian, K-, W-, t-, Cauchy, Laplace, hyperbolic, variance gamma, and normal inverse Gaussian distributions. This dissertation demonstrates the diverse applicability and performance benefits of the Sq-estimator through theoretical analysis, empirical simulation, and the processing of real-world data. Through analytical and empirical means, the Sq-estimator is shown to generally provide higher maximum efficiency than the leading highly robust estimators, and its solutions are also shown to generally be less sensitive to initial conditions. To illustrate the theoretical benefits of the Sq-estimator for complex-valued applications, the statistical efficiencies and robustness of adaptive beamformers based on various estimators are compared. To illustrate the finite-sample performance benefits of the Sq-estimator, empirical simulation results of signal direction-of-arrival estimation are explored. Additionally, the optimal investment of real-world stock data is used to show the practical performance benefits of the Sq-estimator with respect to robustness to extreme events, estimation efficiency, and prediction performance.
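The Sq-estimator itself is not reproduced here. As a hedged illustration of the general fixed-point structure that robust multivariate location and scatter estimators share, the sketch below iterates a simple Huber-type reweighting; the weight function, tuning constant, and normalization are chosen for illustration only and are not the dissertation's estimator.

```python
# Illustrative iteratively reweighted M-estimate of multivariate location and
# scatter: observations with large Mahalanobis distances are down-weighted.
import numpy as np
from scipy.stats import chi2

def huber_m_estimate(X, max_iter=100, tol=1e-8, q=0.9):
    n, p = X.shape
    mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
    c = chi2.ppf(q, df=p)                          # squared-distance cutoff
    for _ in range(max_iter):
        d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
        w = np.where(d2 <= c, 1.0, c / d2)         # Huber-type down-weighting
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mu_new
        S_new = (w[:, None] * Xc).T @ Xc / n
        converged = (np.abs(S_new - S).max() < tol and
                     np.abs(mu_new - mu).max() < tol)
        mu, S = mu_new, S_new
        if converged:
            break
    return mu, S
```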
94

Robust Speech Filter And Voice Encoder Parameter Estimation using the Phase-Phase Correlator

Azad, Abul K. 08 November 2019 (has links)
In recent years, linear prediction voice encoders have become very efficient in terms of computing execution time and channel bandwidth usage while providing, in the absence of impulsive noise, natural-sounding synthetic speech signals. This good performance has been achieved via maximum likelihood estimation of the parameters of an auto-regressive model of order ten that best fits the speech signal, under the assumption that the signal and the noise are Gaussian stochastic processes. However, this method breaks down in the presence of impulse noise, which is common in practice, resulting in harsh or non-intelligible audio signals. In this work, we propose a robust estimator of correlation, the Phase-Phase correlator, that is able to cope with impulsive noise. Utilizing this correlator, we develop a Robust Mixed Excitation Linear Prediction encoder that provides improved audio quality for voiced, unvoiced, and transition speech segments. This is achieved by applying a statistical test to robust Mahalanobis distances to identify the outliers in the corrupted speech signal, which are then replaced with filtered signals. Simulation results reveal that the proposed method outperforms, in variance, bias, and breakdown point, three other robust approaches based on the arcsin law, the polarity coincidence correlator, and the median-of-ratio estimator, without sacrificing the encoder's bandwidth efficiency or compression gain and while remaining compatible with real-time applications. Furthermore, in the presence of impulsive noise, the perceptual speech quality of the proposed encoder also outperforms the state of the art in terms of mean opinion score. / Doctor of Philosophy / Impulsive noise is a natural phenomenon in everyday experience. It is analogous to discontinuities or drastic changes in the natural progression of events. In this research, the disrupting events occur in signals such as speech, power transmission, stock market data, and communication systems. Sudden power outages due to lightning, maintenance, or other catastrophic events are some of the reasons why we may experience performance degradation in our electronic devices. Another example of impulsive noise is playing an old, damaged vinyl record, which results in annoying clicking sounds. At the time instance of each click, the true music or speech, or simply the audible waveform, is completely destroyed. Yet another example of impulse noise is a sudden crash in the stock market; a sudden dive in the market can destroy a regression model and its future predictions. Unfortunately, in the presence of impulsive noise, classical methods are unable to filter out the impulse corruptions. The intended filtering objective of this dissertation is specific, but not limited, to speech signal processing. Specifically, we investigate different filter models to determine the optimum method of eliminating impulsive noise in speech. Note that the optimal filter model differs across time-series signals such as speech, stock market data, and power systems. In our studies we have shown that our speech filtering method outperforms state-of-the-art algorithms. Another major contribution of our research is a speech compression algorithm that is robust to impulse noise in speech. In digital signal processing, a compression method entails representing the same signal with less data while conveying the same message as the original signal.
For example, the human voice produces sounds roughly between 60 Hz and 3500 Hz; in other words, speech occupies approximately 4000 Hz of frequency space. The challenge is whether we can compress speech into half of that space, or even less. This is a very attractive proposition because frequency space is limited, yet wireless service providers want to serve as many users as possible without sacrificing quality, ultimately maximizing their bottom line. Encoding impulse-corrupted speech produces harsh synthesized audio. We have shown that when the encoding is done with the proposed method, the synthesized audio quality is far superior to the state of the art.
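As a hedged sketch of the outlier-identification step described above (flagging corrupted frames via a statistical test on robust Mahalanobis distances), the code below uses scikit-learn's MCD covariance estimator as a stand-in for the Phase-Phase-correlator-based statistics; the per-frame feature matrix, threshold level, and function name are assumptions.

```python
# Flag impulse-noise outlier frames via robust squared Mahalanobis distances
# compared against a chi-square quantile. Feature extraction is assumed done.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def flag_outlier_frames(features, alpha=0.975):
    """features: (n_frames, p) array of per-frame features; returns boolean mask."""
    mcd = MinCovDet().fit(features)
    d2 = mcd.mahalanobis(features)            # squared robust distances
    return d2 > chi2.ppf(alpha, df=features.shape[1])

# Flagged frames would then be replaced with filtered/interpolated signal
# before linear-prediction encoding, as outlined in the abstract.
```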
95

Robust estimation of the number of components for mixtures of linear regression

Meng, Li January 1900 (has links)
Master of Science / Department of Statistics / Weixin Yao / In this report, we investigate robust estimation of the number of components in mixtures of regression models using a trimmed information criterion. Compared to the traditional information criterion, the trimmed criterion is robust and not sensitive to outliers. The superiority of the trimmed methods over the traditional information criterion methods is illustrated through a simulation study. A real data application is also used to illustrate the effectiveness of the trimmed model selection methods.
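A minimal sketch of a trimmed information criterion follows: the BIC is evaluated on the proportion of observations that fit the candidate model best, so outliers cannot drive the choice of the number of components. The helper loglik_per_obs (which would fit a k-component mixture of regressions and return per-observation log-likelihoods), the penalty form, and the trimming fraction are hypothetical.

```python
# Trimmed BIC over candidate component counts k = 1..k_max.
import numpy as np

def trimmed_bic(loglik_i, n_params, alpha=0.05):
    """loglik_i: per-observation log-likelihoods under the fitted model."""
    n = len(loglik_i)
    keep = int(np.ceil((1 - alpha) * n))
    trimmed_ll = np.sort(loglik_i)[-keep:].sum()      # drop the worst-fitting obs
    return -2.0 * trimmed_ll + n_params * np.log(keep)

def select_n_components(loglik_per_obs, n_params_for, k_max=6, alpha=0.05):
    """loglik_per_obs(k) and n_params_for(k) are assumed model-fitting helpers."""
    scores = {k: trimmed_bic(loglik_per_obs(k), n_params_for(k), alpha)
              for k in range(1, k_max + 1)}
    return min(scores, key=scores.get)                # smallest trimmed BIC wins
```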
96

Robust fitting of mixture of factor analyzers using the trimmed likelihood estimator

Yang, Li January 1900 (has links)
Master of Science / Department of Statistics / Weixin Yao / Mixtures of factor analyzers are widely used to cluster high-dimensional data. However, the traditional estimation method is based on normality assumptions for the random terms and is thus sensitive to outliers. In this report, we introduce a robust estimation procedure for mixtures of factor analyzers using the trimmed likelihood estimator (TLE). We use a simulation study and a real data application to demonstrate the robustness of the trimmed estimation procedure and compare it with the traditional normality-based maximum likelihood estimator.
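The following is a hedged sketch of the generic TLE iteration: alternately fit the model on a retained subset and re-select the h observations with the highest log-likelihood under the current fit. A GaussianMixture stands in for a mixture-of-factor-analyzers implementation, which would expose the same fit/score_samples interface; the trimming fraction and iteration cap are illustrative.

```python
# Generic trimmed likelihood estimation (TLE) loop on a mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

def trimmed_likelihood_fit(X, n_components=2, trim=0.1, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    h = int(np.floor((1 - trim) * n))
    subset = rng.choice(n, size=h, replace=False)      # random initial subset
    for _ in range(n_iter):
        model = GaussianMixture(n_components, random_state=seed).fit(X[subset])
        ll = model.score_samples(X)                    # per-observation log-lik
        new_subset = np.argsort(ll)[-h:]               # keep best-fitting h points
        if set(new_subset) == set(subset):
            break
        subset = new_subset
    return model, subset
```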
97

Reduced Complexity Viterbi Decoders for SOQPSK Signals over Multipath Channels

Kannappa, Sandeep Mavuduru 10 1900 (has links)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / High-data-rate communication between airborne vehicles and ground stations over the bandwidth-constrained aeronautical telemetry channel is enabled by the development of bandwidth-efficient Advanced Range Telemetry (ARTM) waveforms. This communication takes place over a multipath channel consisting of two components: a line-of-sight path and one or more ground-reflected paths, which result in frequency-selective fading. We concentrate on the ARTM SOQPSK-TG transmit waveform suite and decode information bits using the reduced-complexity Viterbi algorithm. Two different methodologies are proposed to implement reduced-complexity Viterbi decoders in multipath channels. The first method jointly equalizes the channel and decodes the information bits using the reduced-complexity Viterbi algorithm, while the second method applies a minimum mean square error equalizer prior to the Viterbi decoder. An extensive numerical study compares the performance of the two methodologies. We also demonstrate the performance gain offered by our reduced-complexity Viterbi decoders over the existing linear receiver. In the numerical study, both perfect and estimated channel state information are considered.
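For readers unfamiliar with the decoder at the core of both methods, here is a minimal, generic Viterbi (add-compare-select) sketch; the assumed start state, transition structure, and branch metrics are placeholders rather than the SOQPSK-TG trellis and channel model used in the paper.

```python
# Generic minimum-metric Viterbi decoder over a user-supplied trellis.
import numpy as np

def viterbi(branch_metric, transitions, n_states, n_steps):
    """
    branch_metric(t, s_prev, s_next): cost of that transition at step t.
    transitions[s_prev]: iterable of reachable next states.
    Returns the minimum-metric state sequence (times 0..n_steps).
    """
    cost = np.full(n_states, np.inf)
    cost[0] = 0.0                                  # assume the trellis starts in state 0
    back = np.zeros((n_steps, n_states), dtype=int)
    for t in range(n_steps):
        new_cost = np.full(n_states, np.inf)
        for s_prev in range(n_states):
            for s_next in transitions[s_prev]:
                m = cost[s_prev] + branch_metric(t, s_prev, s_next)
                if m < new_cost[s_next]:           # compare-select
                    new_cost[s_next] = m
                    back[t, s_next] = s_prev
        cost = new_cost
    state = int(np.argmin(cost))                   # traceback from best terminal state
    path = [state]
    for t in range(n_steps - 1, -1, -1):
        state = int(back[t, state])
        path.append(state)
    return path[::-1]
```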
98

EMPIRICAL PROCESSES AND ROC CURVES WITH AN APPLICATION TO LINEAR COMBINATIONS OF DIAGNOSTIC TESTS

Chirila, Costel 01 January 2008 (has links)
The Receiver Operating Characteristic (ROC) curve is the plot of sensitivity vs. 1 - specificity of a quantitative diagnostic test over a wide range of cut-off points c. The empirical ROC curve is probably the most used nonparametric estimator of the ROC curve. The asymptotic properties of this estimator were first developed by Hsieh and Turnbull (1996) based on strong approximations for quantile processes. Jensen et al. (2000) provided a general method to obtain regional confidence bands for the empirical ROC curve, based on its asymptotic distribution. Since most biomarkers do not have high enough sensitivity and specificity to qualify as a good diagnostic test on their own, a combination of biomarkers may result in a better diagnostic test than each one taken alone. Su and Liu (1993) proved that, if the panel of biomarkers is multivariate normally distributed for both diseased and non-diseased populations, then the linear combination using Fisher's linear discriminant coefficients maximizes the area under the ROC curve of the newly formed diagnostic test, called the generalized ROC curve. In this dissertation, we derive the asymptotic properties of the generalized empirical ROC curve, the nonparametric estimator of the generalized ROC curve, by using empirical process theory as in van der Vaart (1998). The pivotal result used in finding the asymptotic behavior of the proposed nonparametric estimator is van der Vaart's (1998) result on random functions that incorporate estimators. Using this powerful lemma, we are able to decompose an equivalent process into a sum of two other processes, usually called the Brownian bridge and the drift term, via Donsker classes of functions. Using a uniform convergence rate result given by Pollard (1984), we derive the limiting process of the drift term. Due to the independence of the random samples, the asymptotic distribution of the generalized empirical ROC process is the sum of the asymptotic distributions of the decomposed processes. For completeness, we first re-derive the asymptotic properties of the empirical ROC curve in the univariate case, using the same technique described above. The methodology is used to combine biomarkers in order to discriminate lung cancer patients from normal subjects.
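As a concrete illustration of the construction studied here, the sketch below forms the linear combination of biomarkers with Fisher's linear discriminant coefficients (under a common-covariance assumption) and computes the empirical ROC of the combined score; the input matrices X0 and X1 are assumed to hold biomarker values for non-diseased and diseased subjects, and the function names are illustrative.

```python
# Fisher-coefficient biomarker combination and its empirical ROC curve.
import numpy as np

def fisher_coefficients(X0, X1):
    """Pooled-covariance Fisher discriminant direction for combining biomarkers."""
    pooled = ((len(X0) - 1) * np.cov(X0, rowvar=False) +
              (len(X1) - 1) * np.cov(X1, rowvar=False)) / (len(X0) + len(X1) - 2)
    return np.linalg.solve(pooled, X1.mean(axis=0) - X0.mean(axis=0))

def empirical_roc(scores0, scores1):
    """Empirical (1 - specificity, sensitivity) pairs over all cut-offs c."""
    cutoffs = np.unique(np.concatenate([scores0, scores1]))
    fpr = np.array([(scores0 > c).mean() for c in cutoffs])
    tpr = np.array([(scores1 > c).mean() for c in cutoffs])
    return fpr, tpr

# Usage sketch: a = fisher_coefficients(X0, X1); fpr, tpr = empirical_roc(X0 @ a, X1 @ a)
```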
99

Measuring the Mass of a Galaxy: An evaluation of the performance of Bayesian mass estimates using statistical simulation

Eadie, Gwendolyn 27 March 2013 (has links)
This research uses a Bayesian approach to study the biases that may occur when kinematic data are used to estimate the mass of a galaxy. Data are simulated from the Hernquist (1990) distribution functions (DFs) for velocity dispersions of the isotropic, constant anisotropic, and anisotropic Osipkov (1979) and Merritt (1985) type, and then analysed using the isotropic Hernquist model. Biases are explored when i) the model and data come from the same DF, ii) the model and data come from the same DF but tangential velocities are unknown, iii) the model and data come from different DFs, and iv) the model and data come from different DFs and the tangential velocities are unknown. Mock observations are also created from the Gauthier (2006) simulations and analysed with the isotropic Hernquist model. No bias was found in situation (i), a slight positive bias was found in (ii), a negative bias was found in (iii), and a large positive bias was found in (iv). The mass estimate of the Gauthier system when tangential velocities were unknown was nearly correct, but the mass profile was not described well by the isotropic Hernquist model. When the Gauthier data were analysed with the tangential velocities, the mass of the system was overestimated. The code created for the research runs three parallel Markov chains for each data set, uses the Gelman-Rubin statistic to assess convergence, and combines the converged chains into a single sample of the posterior distribution for each data set. The code also includes two ways to deal with nuisance parameters. One is to marginalize over the nuisance parameter at every step in the chain, and the other is to sample the nuisance parameters using a hybrid-Gibbs sampler. When tangential velocities, v(t), are unobserved in the analyses above, they are sampled as nuisance parameters in the Markov chain. The v(t) estimates from the Markov chains did a poor job of estimating the true tangential velocities. However, the posterior samples of v(t) proved to be useful, as the estimates of the tangential velocities helped explain the biases discovered in situations (i)-(iv) above. / Thesis (Master, Physics, Engineering Physics and Astronomy), Queen's University, 2013.
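Since the analysis relies on the Gelman-Rubin statistic to assess convergence of the three parallel chains, a minimal sketch of that diagnostic for a single scalar parameter is given below; the standard formula is used, and the array shape is an assumption.

```python
# Gelman-Rubin potential scale reduction factor for m chains of length n.
import numpy as np

def gelman_rubin(chains):
    """chains: (m, n) array, one row per parallel Markov chain."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)              # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled posterior variance estimate
    return np.sqrt(var_hat / W)                  # R-hat near 1 indicates convergence
```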
100

Evaluating the Use of Ridge Regression and Principal Components in Propensity Score Estimators under Multicollinearity

Gripencrantz, Sarah January 2014 (has links)
Multicollinearity can be present in the propensity score model when estimating average treatment effects (ATEs). In this thesis, logistic ridge regression (LRR) and principal components logistic regression (PCLR) are evaluated as alternatives to ML estimation of the propensity score model. ATE estimators based on weighting (IPW), matching, and stratification are assessed in a Monte Carlo simulation study to evaluate LRR and PCLR. Further, an empirical example of using LRR and PCLR on real data under multicollinearity is provided. Results from the simulation study reveal that under multicollinearity and in small samples, the use of LRR reduces bias in the matching estimator compared to ML. In large samples, PCLR yields the lowest bias and typically the lowest MSE across all estimators. PCLR matched ML in bias under IPW estimation and in some cases had lower bias. The stratification estimator was heavily biased compared to matching and IPW, but both bias and MSE improved when PCLR was applied, and in some cases under LRR. In the empirical example, the specification with PCLR was usually the most sensitive when a strongly correlated covariate was included in the propensity score model.
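As a hedged sketch of one of the evaluated combinations, the code below estimates an ATE by inverse-probability weighting with a ridge-penalized (L2) logistic propensity score model, broadly mirroring the LRR/IPW specification; the penalty strength C, normalization choice, and function names are illustrative rather than the thesis's exact setup.

```python
# IPW (normalized) ATE estimate with an L2-penalized logistic propensity model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treatment, outcome, C=1.0):
    """X: covariates; treatment: 0/1 array; outcome: response. Returns ATE estimate."""
    ps_model = LogisticRegression(penalty='l2', C=C, max_iter=1000).fit(X, treatment)
    e = ps_model.predict_proba(X)[:, 1]                   # estimated propensity scores
    w1, w0 = treatment / e, (1 - treatment) / (1 - e)     # inverse-probability weights
    return (np.sum(w1 * outcome) / np.sum(w1) -
            np.sum(w0 * outcome) / np.sum(w0))            # weighted mean difference
```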
