51

Simplified inelastic analysis of notched components subjected to mechanical and thermal loads

Raghavan, Prasanna, January 1998 (has links)
Thesis (M. Eng.), Memorial University of Newfoundland, 1998. / Bibliography: leaves 94-98.
52

On the robustness of LISREL (maximum likelihood estimation) against small sample size and non-normality

Boomsma, Anne. January 1983 (has links)
Thesis (doctoral)--Rijksuniversiteit Groningen, 1983. / Summary in Dutch. Includes bibliographical references (p. 208-233) and index.
53

Segmented regression: a robust approach

Healey, Brian, January 2004 (has links)
Thesis (M.A.S.)--Memorial University of Newfoundland, 2004. / Restricted until October 2005. Bibliography: leaves 92-100.
54

Processos de ordem infinita estocasticamente perturbados / Processes of infinite order stochastically perturbed

Moreira, Lucas, 1984- 19 August 2018 (has links)
Orientador: Nancy Lopes Garcia / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica, 2012 / Abstract: Inspired by Collet, Galves and Leonardi (2008), the original motivation of this work is to answer the following question: is it possible to recover the context tree of a variable-length chain from a perturbed sample of the chain? We first consider binary chains of infinite order in which one of the symbols can be modified with a small, fixed probability. We prove that the transition probabilities of the perturbed chain are uniformly close to the corresponding transition probabilities of the original chain when the contamination probability is small enough. This result allows us to answer the initial question affirmatively: the context tree of the original process can be recovered even when a contaminated sample is used in the estimation procedure, which shows that the context-tree estimator employed is robust. We then consider a second model: given two variable-length chains taking values in the same finite alphabet, at each instant of time the new process randomly chooses one of the two original processes, one of them with a large, fixed probability. The chain obtained in this way can be seen as a stochastic perturbation of the chain chosen with the larger probability. For this model we obtain results similar to those obtained for the first model. / Doutorado / Estatistica / Doutor em Estatística
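For readers who want to experiment with the contamination model described in this abstract, the toy Python script below simulates a binary chain (a first-order Markov chain standing in for the infinite-order chains studied in the thesis), flips each symbol independently with a small fixed probability, and compares empirical transition probabilities before and after the perturbation. All function names and parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_chain(n, p01=0.2, p10=0.4):
    """Binary first-order Markov chain (toy stand-in for an infinite-order chain)."""
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        p_one = p01 if x[t - 1] == 0 else 1.0 - p10
        x[t] = rng.random() < p_one
    return x

def contaminate(x, eps):
    """Flip each symbol independently with a small fixed probability eps."""
    flips = rng.random(len(x)) < eps
    return np.where(flips, 1 - x, x)

def transition_to_one(x):
    """Empirical P(X_t = 1 | X_{t-1} = a) for a in {0, 1}."""
    prev, nxt = x[:-1], x[1:]
    return {a: nxt[prev == a].mean() for a in (0, 1)}

x = simulate_chain(100_000)
y = contaminate(x, eps=0.02)
print("original  chain:", transition_to_one(x))
print("perturbed chain:", transition_to_one(y))  # close to the original for small eps
```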
55

Is ocean reflectance acquired by ferry passengers robust for science applications?

Yang, Yuyan 22 December 2017 (has links)
Monitoring the dynamics of ocean water productivity and how it affects fisheries is essential for management. It requires data on appropriate spatial and temporal scales, which can be provided by operational ocean colour satellites. However, accurate productivity data from ocean colour imagery are only possible with proper validation of, for instance, the atmospheric correction applied to the images. In situ water reflectance data are therefore of great value for validation, and they are traditionally measured with the Surface Acquisition System (SAS) solar tracker. Recently, a mobile application, 'HydroColor', was developed to acquire water reflectance data. We examine the accuracy of the water reflectance acquired by HydroColor with the help of trained and untrained citizens under different environmental conditions. We used water reflectance data acquired by the SAS solar tracker and HydroColor onboard the BC ferry Queen of Oak Bay from July to September 2016. Monte Carlo permutation F-tests were used to assess whether the differences between measurements collected by the SAS solar tracker and by citizens using HydroColor were significant. Results showed that the HydroColor measurements collected by 447 citizens were accurate in the red, green, and blue bands, as well as in the red/green and red/blue ratios, under different environmental conditions. Piecewise models were developed for correcting HydroColor blue/green water reflectance ratios based on the SAS solar tracker measurements. In addition, we found that training and environmental conditions affected data quality: a trained citizen obtained higher quality HydroColor data, especially under clear skies during the noon run (12:50-2:30 pm). / Graduate
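The abstract above relies on Monte Carlo permutation tests to decide whether the SAS solar tracker and HydroColor measurements differ significantly. The Python sketch below illustrates the permutation idea on two hypothetical reflectance samples using a difference-in-means statistic; the thesis itself used permutation F-tests, and the data sizes and variable names here are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def permutation_test(a, b, n_perm=9999):
    """Two-sample Monte Carlo permutation test on the absolute difference in means."""
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = abs(perm[:len(a)].mean() - perm[len(a):].mean())
        count += int(stat >= observed)
    return (count + 1) / (n_perm + 1)   # add-one correction gives a valid p-value

# Hypothetical red-band reflectance from the reference instrument and the phone app
sas = rng.normal(0.012, 0.003, size=60)
app = rng.normal(0.013, 0.004, size=60)
print("permutation p-value:", permutation_test(sas, app))
```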
56

The Robustness of O'Brien's r Transformation to Non-Normality

Gordon, Carol J. (Carol Jean) 08 1900 (has links)
A Monte Carlo simulation technique was employed in this study to determine if the r transformation, a test of homogeneity of variance, affords adequate protection against Type I error over a range of equal sample sizes and number of groups when samples are obtained from normal and non-normal distributions. Additionally, this study sought to determine if the r transformation is more robust than Bartlett's chi-square to deviations from normality. Four populations were generated representing normal, uniform, symmetric leptokurtic, and skewed leptokurtic distributions. For each sample size (6, 12, 24, 48), number of groups (3, 4, 5, 7), and population distribution condition, the r transformation and Bartlett's chi-square were calculated. This procedure was replicated 1,000 times; the actual significance level was determined and compared to the nominal significance level of .05. On the basis of the analysis of the generated data, the following conclusions are drawn. First, the r transformation is generally robust to violations of normality when the size of the samples tested is twelve or larger. Second, in the instances where a significant difference occurred between the actual and nominal significance levels, the r transformation produced (a) conservative Type I error rates if the kurtosis of the parent population were 1.414 or less and (b) an inflated Type I error rate when the index of kurtosis was three. Third, the r transformation should not be used if sample size is smaller than twelve. Fourth, the r transformation is more robust in all instances to non-normality, but the Bartlett test is superior in controlling Type I error when samples are from a population with a normal distribution. In light of these conclusions, the r transformation may be used as a general utility test of homogeneity of variances when either the distribution of the parent population is unknown or is known to have a non-normal distribution, and the size of the equal samples is at least twelve.
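A compact sketch of the kind of Monte Carlo check described in this abstract, limited to Bartlett's chi-square (O'Brien's r transformation is not available in SciPy, so it is not shown): draw equal-sized groups with identical variances from a chosen parent distribution, run the homogeneity-of-variance test many times, and compare the empirical rejection rate with the nominal .05 level. The distributions below merely stand in for the abstract's normal, uniform, and leptokurtic populations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def empirical_type_i_error(sample_dist, k=4, n=12, reps=1000, alpha=0.05):
    """Estimate Bartlett's-test Type I error rate when all k groups share the same variance."""
    rejections = 0
    for _ in range(reps):
        groups = [sample_dist(n) for _ in range(k)]
        _, p = stats.bartlett(*groups)
        rejections += int(p < alpha)
    return rejections / reps

distributions = {
    "normal":  lambda n: rng.normal(size=n),
    "uniform": lambda n: rng.uniform(-1, 1, size=n),   # platykurtic parent
    "t(5)":    lambda n: rng.standard_t(5, size=n),    # leptokurtic parent
}
for name, dist in distributions.items():
    print(f"{name:8s} empirical alpha = {empirical_type_i_error(dist):.3f}")
```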
57

Two Essays on High-Dimensional Robust Variable Selection and an Application to Corporate Bankruptcy Prediction

Li, Shaobo 29 October 2018 (has links)
No description available.
58

Robust state estimation in power systems

Phaniraj, Viruru 12 October 2005 (has links)
The application of robust estimation methods to the power system state estimation problem was investigated. Techniques using both nonlinear and combinatorial optimization were considered, based on the requirements that the method developed should be statistically robust, and fast enough to be used in a real-time environment. Some basic concepts from robust statistics are introduced. The various estimation methods considered are reviewed, and the implementation of the selected estimator is described. Simulation results for several IEEE test systems are included. Other applications of the proposed technique, such as leverage point identification in large sparse systems, and robust meter placement are described. / Ph. D.
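The abstract does not state which robust estimator was ultimately selected, so the sketch below shows one generic possibility for a linearised measurement model z = Hx + e: a Huber M-estimator solved by iteratively reweighted least squares, which downweights gross measurement errors such as bad meter readings instead of letting them dominate the fit. The matrices, threshold, and toy data are assumptions made for illustration.

```python
import numpy as np

def huber_irls(H, z, k=1.345, iters=20):
    """Huber M-estimate of x in z = H x + e via iteratively reweighted least squares."""
    x = np.linalg.lstsq(H, z, rcond=None)[0]            # ordinary least-squares start
    for _ in range(iters):
        r = z - H @ x
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust residual scale (MAD)
        u = np.abs(r) / scale
        w = np.minimum(1.0, k / np.maximum(u, 1e-12))   # Huber weights
        W = np.diag(w)
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    return x

# Toy example: 3 states, 10 measurements, one gross error
rng = np.random.default_rng(3)
H = rng.normal(size=(10, 3))
x_true = np.array([1.0, -2.0, 0.5])
z = H @ x_true + 0.01 * rng.normal(size=10)
z[4] += 5.0                                             # bad measurement
print("robust estimate:", huber_irls(H, z))
```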
59

Statistically robust Pseudo Linear Identification

Alnor, Harald 08 September 2012 (has links)
It is common to assume that the noise disturbing measuring devices is of a Gaussian nature, but this assumption is not always fulfilled. A few examples are the cases where the measurement device fails periodically, the data transmission from device to microprocessor fails, or the A/D conversion fails. In these cases the noise will no longer be Gaussian distributed; rather, it will be a mixture of Gaussian noise and data not related to the physical process. This poses a problem for estimators derived under the Gaussian assumption, in the sense that these estimators are likely to produce highly biased estimates in a non-Gaussian environment. This thesis devises a way to robustify the Pseudo Linear Identification (PLID) algorithm, a joint parameter and state estimator of the Kalman filter type. The PLID algorithm was originally derived under a Gaussian noise assumption. It is made robust by filtering the measurements through a nonlinear odd symmetric function, called the mb function, and by letting the covariance update depend on how far the measurement is from the prediction. In the original PLID the measurements are used unfiltered in the covariance calculation. / Master of Science
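The exact mb function is defined in the thesis itself; as a rough illustration of the idea, the sketch below uses a Huber-type saturation inside a scalar Kalman-style measurement update and shrinks the covariance correction when the measurement lies far from the prediction. This is a stand-in for the robustified PLID update, not a reproduction of it.

```python
import numpy as np

def psi(e, c=2.0):
    """Odd symmetric saturation (Huber-type stand-in for the thesis's mb function)."""
    return np.clip(e, -c, c)

def robust_update(x_pred, P_pred, z, h, r, c=2.0):
    """Scalar measurement update that clips the innovation and downweights the
    covariance correction when the measurement is far from the prediction."""
    innovation = z - h * x_pred
    s = h * P_pred * h + r                    # innovation variance
    k = P_pred * h / s                        # Kalman gain
    x_new = x_pred + k * psi(innovation / np.sqrt(s), c) * np.sqrt(s)
    w = 1.0 if abs(innovation) <= c * np.sqrt(s) else c * np.sqrt(s) / abs(innovation)
    P_new = P_pred - w * k * h * P_pred       # smaller correction for outlying measurements
    return x_new, P_new

x, P = 0.0, 1.0
for z in [0.1, -0.2, 8.0, 0.05]:              # 8.0 plays the role of a transmission glitch
    x, P = robust_update(x, P, z, h=1.0, r=0.1)
    print(f"z={z:5.2f}  state={x:6.3f}  P={P:6.3f}")
```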
60

Aspects of probabilistic modelling for data analysis

Delannay, Nicolas 23 October 2007 (has links)
Computer technologies have revolutionised the processing of information and the search for knowledge. With ever-increasing computational power, it is becoming possible to tackle new data analysis applications as diverse as mining Internet resources, analysing drug effects on the organism or assisting wardens with autonomous video detection techniques. Fundamentally, the principle of any data analysis task is to fit a model which encodes well the dependencies (or patterns) present in the data. However, the difficulty is precisely to define such a proper model when data are noisy, dependencies are highly stochastic and there is no simple physical rule to represent them. The aim of this work is to discuss the principles, the advantages and the weaknesses of the probabilistic modelling framework for data analysis. The main idea of the framework is to model the dispersion of the data, as well as uncertainty about the model itself, by probability distributions. Three data analysis tasks are presented, and for each of them the discussion is based on experimental results from real datasets. The first task considers the problem of linear subspace identification. We show how one can replace a Gaussian noise model by a Student-t noise model to make the identification more robust to atypical samples while keeping the learning procedure simple. The second task is about regression, applied more specifically to near-infrared spectroscopy datasets. We show how spectra should be pre-processed before entering the regression model. We then analyse the validity of the Bayesian model selection principle for this application (in particular within the Gaussian Process formulation) and compare this principle to the resampling selection scheme. The final task considered is Collaborative Filtering, which is related to applications such as recommendation for e-commerce and text mining. This task illustrates how intuitive considerations can guide the design of the model and the choice of the probability distributions appearing in it. We compare the intuitive approach with a simpler matrix factorisation approach.
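The first task above swaps a Gaussian noise model for a Student-t one so that atypical samples are downweighted automatically. A one-dimensional illustration of that idea is sketched below: a straight-line fit under Student-t noise via the standard EM reweighting, with the degrees of freedom held fixed. Parameter values and names are illustrative only.

```python
import numpy as np

def fit_line_student_t(x, y, nu=3.0, iters=50):
    """Fit y = a*x + b under Student-t noise via EM: atypical points get small weights."""
    X = np.column_stack([x, np.ones_like(x)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # Gaussian (least-squares) start
    sigma2 = np.var(y - X @ beta)
    for _ in range(iters):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r**2 / sigma2)               # E-step: expected precisions
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)    # M-step: weighted least squares
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / len(y)
    return beta, w

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + 0.05 * rng.normal(size=50)
y[10] += 3.0                                                # one atypical sample
beta, w = fit_line_student_t(x, y)
print("slope, intercept:", beta, " weight given to the outlier:", w[10])
```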
