1

Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades

Hattaway, James T. 09 March 2010
The normal distribution is commonly seen in nature, education, and business, and mounded or bell-shaped data are easily found across various fields of study. Although the normal distribution has high utility, often the full range cannot be observed. The truncated normal distribution accounts for the inability to observe the full range and allows inference back to the original population. Depending on the amount of truncation, the truncated normal has several distinct shapes. A simulation study evaluating and comparing the performance of the maximum likelihood estimators and method of moments estimators is conducted. A Likelihood Ratio Test (LRT) is derived for testing the null hypothesis of equal population means for truncated normal data. A simulation study evaluating the power of the LRT to detect absolute standardized differences between the two population means with small sample sizes was conducted and the power curves approximated. Another simulation study evaluated the power of the LRT to detect absolute differences with large, unequal sample sizes. The LRT was then extended to a k-population test of equal population means, and a simulation study examined its power to detect absolute standardized differences when one population mean differs from the others.
Stat 221 is the largest introductory statistics course at BYU, serving about 4,500 students a year. Every section of Stat 221 shares common homework assignments and tests, which controls for confounding when making comparisons between sections. Historically grades have been thought to be bell shaped, but with grade inflation and other factors the upper tail is lost because of truncation at 100, so it is reasonable to assume that grades follow a truncated normal distribution. Inference using the final grades should be done recognizing the truncation. Performance of the different Stat 221 sections was evaluated using the derived LRTs.
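The truncated-normal likelihood behind these LRTs can be sketched as follows. This is an illustrative reconstruction, not the thesis's code: the grade-distribution parameters (mean 92, sd 8) and the single upper truncation point at 100 are assumptions of the simulation.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(42)

# Simulate "grades": a normal population truncated above at 100,
# as in the Stat 221 example (mu = 92 and sigma = 8 are illustrative).
UPPER = 100.0
raw = rng.normal(92, 8, size=5000)
grades = raw[raw <= UPPER]  # only values below the truncation point are observed

def neg_log_lik(params, x, upper):
    """Negative log-likelihood of a normal truncated above at `upper`."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    z = (x - mu) / sigma
    # density: f(x) = phi(z) / (sigma * Phi((upper - mu) / sigma))
    log_f = stats.norm.logpdf(z) - np.log(sigma) - stats.norm.logcdf((upper - mu) / sigma)
    return -np.sum(log_f)

res = optimize.minimize(neg_log_lik, x0=[grades.mean(), grades.std()],
                        args=(grades, UPPER), method="Nelder-Mead")
mu_hat, sigma_hat = res.x  # MLE recovers the pre-truncation mean and sd
```

The LRT in the abstract builds on this same likelihood: the test statistic is twice the difference between the log-likelihood maximized freely and the one maximized under the equal-means restriction.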
2

Robust Control Charts

Cetinyurek, Aysun 01 January 2007
ABSTRACT: Robust Control Charts. Çetinyürek, Aysun. M.Sc., Department of Statistics. Supervisor: Dr. Barış Sürücü. Co-Supervisor: Assoc. Prof. Dr. Birdal Şenoğlu. December 2006, 82 pages. Control charts are among the most commonly used tools in statistical process control. A prominent tool of statistical process control is the Shewhart control chart, which depends on the assumption of normality. However, violations of the underlying normality assumption are common in practice. For this reason, control charts for symmetric long- and short-tailed distributions are constructed using least squares estimators and the robust estimators: modified maximum likelihood, trim, MAD, and wave. In order to evaluate the performance of the charts under the assumed distribution and to investigate their robustness properties, the probability of plotting outside the control limits is calculated via Monte Carlo simulation.
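A minimal sketch of one of the robust alternatives mentioned, the MAD-based estimator (the modified maximum likelihood, trim, and wave estimators are not shown; the data and contamination are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 subgroups of size 5 with one gross outlier (heavy-tailed contamination).
data = rng.normal(10, 1, size=(50, 5))
data[3, 2] += 15  # contaminate subgroup 3

# Classical Shewhart limits use the sample mean and sd; the robust version
# replaces them with the median and the consistency-scaled MAD.
flat = data.ravel()
mu_robust = np.median(flat)
sigma_robust = 1.4826 * np.median(np.abs(flat - mu_robust))  # MAD scaled for normal data

n = data.shape[1]
ucl = mu_robust + 3 * sigma_robust / np.sqrt(n)  # 3-sigma limits for subgroup means
lcl = mu_robust - 3 * sigma_robust / np.sqrt(n)

means = data.mean(axis=1)
out_of_control = np.flatnonzero((means > ucl) | (means < lcl))
```

The point of the robust estimators is visible here: the outlier barely moves the median and MAD, so the limits stay tight and the contaminated subgroup is flagged, whereas mean- and sd-based limits would be inflated by the very observation they should detect.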
3

The gravity model for international trade: Specification and estimation issues in the prevalence of zero flows

Krisztin, Tamás, Fischer, Manfred M. 14 August 2014
The gravity model for international trade is one of the most successful empirical models in the trade literature. There is a long tradition of log-linearising the multiplicative model and estimating the parameters of interest by least squares, but this practice is inappropriate for several reasons. First, bilateral trade flows are frequently zero, and disregarding countries that do not trade with each other produces biased results. Second, log-linearisation in the presence of heteroscedasticity leads, in general, to inconsistent estimates. In recent years, the Poisson gravity model along with pseudo maximum likelihood estimation has become popular as a way of dealing with the econometric issues that arise with origin-destination flows. But the standard Poisson specification is vulnerable to overdispersion and excess zero flows. To overcome these problems, this paper presents zero-inflated extensions of the Poisson and negative binomial specifications as viable alternatives to both the log-linear and the standard Poisson specifications of the gravity model. The performance of the alternative specifications is assessed on a real-world example in which more than half of the country-level trade flows are zero. (authors' abstract) / Series: Working Papers in Regional Science
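A zero-inflated Poisson fit of the kind the paper proposes can be sketched with an intercept-only model on simulated flows (real gravity covariates such as GDP and distance are omitted here, and all parameter values are illustrative):

```python
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(0)

# Zero-inflated Poisson "trade flows": with probability pi the flow is a
# structural zero; otherwise it is Poisson(lam). Over half the flows are zero.
pi_true, lam_true = 0.5, 3.0
n = 20000
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

def zip_nll(params, y):
    # Unconstrained parameterization: logit(pi), log(lam).
    pi = 1 / (1 + np.exp(-params[0]))
    lam = np.exp(params[1])
    # P(0) mixes structural zeros and Poisson zeros; positive counts are Poisson.
    log_p_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    log_p_pos = np.log(1 - pi) - lam + y * np.log(lam) - special.gammaln(y + 1)
    return -np.sum(np.where(y == 0, log_p_zero, log_p_pos))

res = optimize.minimize(zip_nll, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

In the full gravity specification, both `log(lam)` and `logit(pi)` would be linear in origin and destination covariates; the intercept-only version above isolates the mixture mechanics.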
4

Refinamentos assintóticos em modelos lineares generalizados heteroscedáticos / Asymptotic refinements in heteroskedastic generalized linear models

Barros, Fabiana Uchôa 07 March 2017
In this thesis, we develop asymptotic refinements in heteroskedastic generalized linear models (Smyth, 1989). We first obtain the second-order covariance matrix of the maximum likelihood estimators corrected by their first-order bias. Based on this matrix, we suggest modifications to the Wald statistic. We then derive the coefficients of the Bartlett-type correction factor for the gradient test statistic. Next, we obtain the asymptotic skewness coefficient of the distribution of the maximum likelihood estimators of the model parameters, and finally the asymptotic kurtosis coefficient of that distribution. Monte Carlo simulation studies are used to evaluate the results.
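The Bartlett-type correction idea can be illustrated in a far simpler setting than the heteroskedastic GLM of the thesis: rescale a test statistic so that its small-sample mean matches the mean of its chi-square reference distribution. The sketch below does this by Monte Carlo for the normal-mean LR test (sample size and replication counts are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n, reps = 10, 20000
lr = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, 1.0, n)
    # LR statistic for H0: mu = 0 vs. free mu, sigma unknown (normal model).
    s2_0 = np.mean(x ** 2)                # sigma^2 MLE under H0
    s2_1 = np.mean((x - x.mean()) ** 2)   # unrestricted sigma^2 MLE
    lr[r] = n * np.log(s2_0 / s2_1)

c = lr.mean() / 1.0   # chi2 with 1 df has mean 1, so c estimates the Bartlett factor
crit = stats.chi2.ppf(0.95, 1)
rej_raw = np.mean(lr > crit)        # liberal in small samples
rej_corrected = np.mean(lr / c > crit)  # rescaled statistic, closer to nominal 5%
```

Dividing the statistic by `c` is the mean-matching step; the thesis derives such correction coefficients analytically for the gradient statistic rather than estimating them by simulation.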
6

O processo de Poisson estendido e aplicações / The extended Poisson process and applications

Salasar, Luis Ernesto Bueno 14 June 2007
In this dissertation we study how the extended Poisson process can be applied to construct discrete probabilistic models. An extended Poisson process is a continuous-time stochastic process with state space equal to the natural numbers, obtained as a generalization of the homogeneous Poisson process in which the transition rates depend on the current state of the process. From its transition rates and the Chapman-Kolmogorov differential equations, we can determine the probability distribution of the process at any fixed time. Conversely, given any probability distribution on the natural numbers, it is possible to determine, uniquely, a sequence of transition rates of an extended Poisson process such that, for some instant, the one-dimensional distribution of the process coincides with the given distribution. The extended Poisson process is therefore a very flexible framework for the analysis of discrete data, since it generalizes all discrete probabilistic models. We present the transition rates of extended Poisson processes that generate the Poisson, Binomial, and Negative Binomial distributions, and determine maximum likelihood estimators, confidence intervals, and hypothesis tests for the parameters of the proposed models. We also perform a Bayesian analysis of these models with informative and noninformative priors, presenting posterior summaries and comparing these results with those obtained via classical inference.
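The Chapman-Kolmogorov (forward) equations mentioned above can be integrated numerically. With constant transition rates the state distribution must reduce to the Poisson distribution, which makes a convenient check; truncating the state space at N = 60 is an assumption of this sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy import stats

# Pure-birth (extended Poisson) process: state n jumps to n+1 at rate lam_n.
# With constant rates lam_n = lam this is the homogeneous Poisson process,
# so the state distribution at time t should match Poisson(lam * t).
lam = 2.0
N = 60  # truncate the state space well above the probable range

def forward_eqs(t, p):
    rates = np.full(N, lam)  # lam_0 .. lam_{N-1}; make these state-dependent to generalize
    dp = np.empty(N + 1)
    dp[0] = -rates[0] * p[0]
    dp[1:N] = rates[:N - 1] * p[:N - 1] - rates[1:] * p[1:N]  # inflow minus outflow
    dp[N] = rates[N - 1] * p[N - 1]  # truncation: top state absorbs
    return dp

p0 = np.zeros(N + 1)
p0[0] = 1.0  # start in state 0 with probability 1
sol = solve_ivp(forward_eqs, (0.0, 3.0), p0, rtol=1e-10, atol=1e-12)
p_t = sol.y[:, -1]
poisson_pmf = stats.poisson.pmf(np.arange(N + 1), lam * 3.0)
```

Replacing the constant `rates` array with, say, `lam * (n + 1)`-type state-dependent rates is exactly how the dissertation generates Binomial and Negative Binomial one-dimensional distributions from the same machinery.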
7

Reconstrução de energia em calorímetros operando em alta luminosidade usando estimadores de máxima verossimilhança / Energy reconstruction in calorimeters operating under high luminosity using maximum likelihood estimators

Paschoalin, Thiago Campos 15 March 2016
This dissertation presents signal processing techniques for energy estimation in high-energy calorimetry. CERN, one of the most important particle physics research centers, operates the LHC particle accelerator, which houses the ATLAS experiment. The TileCal, an important calorimeter of ATLAS, has many readout channels operating at high event rates. The energy of particles interacting with this calorimeter is reconstructed by estimating the amplitude of the signal generated in its channels, so accurate noise modeling is important for developing efficient estimation techniques. As the luminosity (the number of particles incident on the detector per unit time) of the TileCal increases, the noise model changes, degrading the performance of the estimation techniques used previously. Modeling the new noise as a lognormal distribution enables a new estimation technique based on Maximum Likelihood Estimators (MLE), improving parameter estimation and leading to a more accurate reconstruction of the signal energy. A new way of analyzing estimation quality is also presented, which proves effective and useful in high-luminosity conditions. A comparison between the method used by CERN and the newly developed method showed that the proposed solution performs better and is suitable for the high-luminosity scenario to which the TileCal will be subject from 2018.
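The lognormal noise model has a convenient consequence for maximum likelihood: on the log scale the problem reduces to normal MLE, so the estimators have closed form. A sketch with simulated noise (the parameter values are illustrative, not TileCal data):

```python
import numpy as np

rng = np.random.default_rng(11)

# If noise is Lognormal(mu, sigma), then log(noise) is Normal(mu, sigma),
# so the MLEs are simply the sample mean and sd of the log-noise.
mu_true, sigma_true = 1.2, 0.4
samples = rng.lognormal(mu_true, sigma_true, size=100_000)

log_s = np.log(samples)
mu_hat = log_s.mean()      # MLE of mu
sigma_hat = log_s.std()    # MLE of sigma (1/n denominator, as ML prescribes)
```

In the actual reconstruction problem the likelihood also involves the pulse amplitude, so the optimization is no longer closed-form, but the log-transform trick is what makes the lognormal model tractable.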
8

Melhoramentos inferenciais no modelo Beta-Skew-t-EGARCH / Inferential improvements for the Beta-Skew-t-EGARCH model

Muller, Fernanda Maria 25 February 2016
The Beta-Skew-t-EGARCH model was recently proposed in the literature to model the volatility of financial returns. Inference on the model parameters is based on the maximum likelihood method. The maximum likelihood estimators have good asymptotic properties, but in finite samples they can be considerably biased. Monte Carlo simulations were used to evaluate the finite-sample performance of the point estimators. Numerical results indicated that the maximum likelihood estimators of some parameters are biased in sample sizes smaller than 3,000. Thus, bootstrap bias correction procedures were considered to obtain more accurate estimators in small samples; the corrected estimators showed smaller percentage relative bias, and better forecasts were observed when the model with bias-corrected estimators was used. In addition, we propose a likelihood ratio test to assist in selecting between the Beta-Skew-t-EGARCH model with one or two volatility components. Numerical evaluation of the two-component test showed distorted null rejection rates in sample sizes smaller than or equal to 1,000. To improve the performance of the proposed test in small samples, the bootstrap-based likelihood ratio test and the bootstrap Bartlett correction were considered; the bootstrap-based test exhibited null rejection rates closest to the nominal values, and the evaluation results showed the practical usefulness of the two-component tests. Finally, the proposed methods were applied to log-returns of the German stock index.
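The parametric bootstrap bias correction used in the dissertation can be sketched on a much simpler model, the maximum likelihood estimator of a normal standard deviation, rather than Beta-Skew-t-EGARCH; the recipe (estimate, resample from the fitted model, subtract the estimated bias) is the same:

```python
import numpy as np

rng = np.random.default_rng(7)

# Small sample: the ML estimator of sigma is biased downward.
n = 20
x = rng.normal(0, 1, n)
sigma_mle = np.sqrt(np.mean((x - x.mean()) ** 2))

# Parametric bootstrap: simulate from the fitted model and re-estimate.
B = 2000
boot = np.empty(B)
for b in range(B):
    xb = rng.normal(x.mean(), sigma_mle, n)
    boot[b] = np.sqrt(np.mean((xb - xb.mean()) ** 2))

bias_hat = boot.mean() - sigma_mle   # estimated bias of the MLE
sigma_bc = sigma_mle - bias_hat      # bias-corrected estimate
```

Because the MLE underestimates sigma, `bias_hat` comes out negative and the corrected estimate is pushed upward, which is the same direction of adjustment the dissertation reports for the volatility-model parameters.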
9

Ενίσχυση σημάτων μουσικής υπό το περιβάλλον θορύβου / Enhancement of music signals in noisy environments

Παπανικολάου, Παναγιώτης 20 October 2010
This thesis attempts to apply noise reduction algorithms to music signals and draw conclusions about the performance of each algorithm for each musical genre. The main aims are to clarify the basic problems of sound enhancement and to present the various algorithms developed to solve them. After a brief introduction to the basic concepts of sound enhancement, we examine and analyze representative algorithms from each of the three main classes of noise-reduction techniques proposed in the speech-enhancement literature: spectral subtractive algorithms, statistical-model-based algorithms, and subspace algorithms. To evaluate the performance of these algorithms, we use objective quality measures whose results allow us to compare the performance of each algorithm. Using four different objective measures, we conduct experiments that yield a set of indicative values facilitating both within-class and across-class algorithm comparisons. From these comparisons we draw useful conclusions about the choice of parameters for each algorithm and the suitability of each algorithm for specific noise conditions and musical genres.
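A minimal sketch of the first class, spectral subtraction: estimate the noise magnitude spectrum from noise-only frames, subtract it frame by frame, and floor negative magnitudes at zero. A pure tone stands in for a music signal here, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)          # stand-in for a music signal
noise = 0.3 * rng.standard_normal(fs)
noisy = clean + noise

frame, hop = 256, 128
window = np.hanning(frame)

# Noise magnitude spectrum averaged over noise-only frames
# (in practice this would come from silent passages).
noise_mags = [np.abs(np.fft.rfft(window * noise[i:i + frame]))
              for i in range(0, fs - frame, hop)]
noise_mag = np.mean(noise_mags, axis=0)

out = np.zeros_like(noisy)
for start in range(0, len(noisy) - frame, hop):
    seg = window * noisy[start:start + frame]
    spec = np.fft.rfft(seg)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # subtract, floor at zero
    # Keep the noisy phase, resynthesize, and overlap-add (Hann at 50% overlap).
    out[start:start + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
```

The half-wave flooring is also the source of the "musical noise" artifact that the statistical-model-based and subspace methods discussed in the thesis try to avoid.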
