1

The Distribution of Cotton Fiber Length

Belmasrour, Rachid 05 August 2010 (has links)
By testing a fiber beard, certain cotton fiber length parameters can be obtained rapidly. This is the method used by the High Volume Instrument (HVI). This study aims to explore approaches to inference for the length distributions of HVI beard samples, in order to develop new methods that can help recover the distribution of the original fiber lengths and further improve HVI length measurements. First, mathematical functions were sought to describe three types of length distributions involved in the beard method as used in the HVI: the lengths of the original fiber population before it is picked by the HVI Fibrosampler, the lengths of the fibers picked by the Fibrosampler, and the lengths of the beard's projecting portion that is actually scanned by the HVI. Eight sets of cotton samples covering a wide range of fiber lengths were selected and tested on the Advanced Fiber Information System (AFIS). The measured single-fiber length data were used to find the underlying theoretical length distributions, which can thus be treated as the population distributions of the cotton samples. Fiber length distributions by number and by weight are discussed separately; in both cases a mixture of two Weibull distributions fits the fiber length data well, a finding confirmed by Kolmogorov-Smirnov goodness-of-fit tests. Furthermore, length parameters such as the Mean Length (ML) and the Upper Half Mean Length (UHML) are compared between the empirical distribution and the fitted distributions. Finally, the fitted length distributions are used in a Partial Least Squares (PLS) regression to estimate the distribution of the original fiber lengths from the distribution of the projected ones.
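The two-Weibull mixture fit and Kolmogorov-Smirnov check described in this abstract can be sketched as follows; this is a minimal Python illustration in which the synthetic fiber lengths, starting values, and parameterization are all assumptions of the sketch, not the thesis's actual procedure.

```python
# Minimal sketch: maximum-likelihood fit of a two-component Weibull mixture
# to fiber lengths, then a Kolmogorov-Smirnov check of the fitted CDF.
# Synthetic data and starting values are illustrative assumptions.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
lengths = np.concatenate([  # two synthetic fiber subpopulations (mm)
    stats.weibull_min.rvs(2.0, scale=12.0, size=600, random_state=rng),
    stats.weibull_min.rvs(4.0, scale=28.0, size=400, random_state=rng),
])

def neg_log_lik(theta, x):
    w = 1.0 / (1.0 + np.exp(-theta[0]))      # mixing weight kept in (0, 1)
    c1, s1, c2, s2 = np.exp(theta[1:])       # shapes and scales kept > 0
    pdf = (w * stats.weibull_min.pdf(x, c1, scale=s1)
           + (1.0 - w) * stats.weibull_min.pdf(x, c2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

start = [0.0, np.log(2.0), np.log(10.0), np.log(4.0), np.log(30.0)]
res = minimize(neg_log_lik, start, args=(lengths,), method="Nelder-Mead",
               options={"maxiter": 10_000, "fatol": 1e-8})
w = 1.0 / (1.0 + np.exp(-res.x[0]))
c1, s1, c2, s2 = np.exp(res.x[1:])

def mix_cdf(x):
    return (w * stats.weibull_min.cdf(x, c1, scale=s1)
            + (1.0 - w) * stats.weibull_min.cdf(x, c2, scale=s2))

# KS test against the fitted mixture (the nominal p-value is optimistic
# because the parameters were estimated from the same data).
ks = stats.kstest(lengths, mix_cdf)
print(f"weight={w:.3f}  KS={ks.statistic:.4f}  p={ks.pvalue:.3f}")
```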
2

Využití kvantilových funkcí při kostrukci pravděpodobnostních modelů mzdových rozdělení / An Application of Quantile Functions in Probability Model Constructions of Wage Distributions

Pavelka, Roman January 2004 (has links)
From 1995 to 2008, the Average Earnings Information System, under the professional supervision of the Ministry of Labour and Social Affairs of the Czech Republic, collected wage and personal data on individual employees. Because this statistical survey gathers wage and personal data for specific employed persons, it is possible to obtain a wage distribution, i.e., to see how wages are spread among individual employees. The values wages may take over the whole wage interval are not deterministic but result from the interaction of many random influences; a wage must therefore be treated as a random quantity with a probability density function. The spread of wages across all labor-market segments is described by a wage distribution. Even though high-income employees are an evidently small category, their incomes markedly affect the reported average wage level and, in particular, the overall variability of the wage data. Wage data sets are thus characterized by an average wage that exceeds the wages of the majority of employees, and by high variability due to great wage heterogeneity. Under such heterogeneity, the general approach of fitting a single chosen distribution function or probability density function does not work well. This motivates a quantile approach to statistical modeling, i.e., modeling an earnings distribution with a suitable inverse distribution function. Probability modeling with generalized or compound forms of quantile functions better characterizes a wage distribution marked by high asymmetry and wage heterogeneity. The inverse distribution function used as a probability model of a wage distribution can be expressed as a distributional mixture over partial groups of employees: each component distribution of the mixture corresponds to a group of employees with more homogeneous earnings, and the partial subsets differ in the parameters of their component densities and in the shares those densities contribute to the total wage distribution.
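A minimal sketch of the quantile-function idea: model a right-skewed wage distribution as a mixture of components for more homogeneous employee groups, and obtain its quantile function Q(p) by numerically inverting the mixture CDF. The two lognormal components, weights, and numbers below are illustrative assumptions, not the thesis's fitted model.

```python
# Sketch: a wage distribution as a two-component lognormal mixture, with
# the quantile function obtained by inverting the mixture CDF numerically.
import numpy as np
from scipy import stats
from scipy.optimize import brentq

# Two employee groups with more homogeneous earnings (illustrative).
w = 0.85                                      # share of the lower-wage group
low  = stats.lognorm(s=0.35, scale=22_000)    # scale = exp(median log-wage)
high = stats.lognorm(s=0.55, scale=60_000)

def mixture_cdf(x):
    return w * low.cdf(x) + (1.0 - w) * high.cdf(x)

def mixture_quantile(p, lo=1.0, hi=1e7):
    # Root-find F(x) = p; the mixture CDF is continuous and increasing.
    return brentq(lambda x: mixture_cdf(x) - p, lo, hi)

# Quantile-based summaries of the skewed wage distribution.
for p in (0.10, 0.50, 0.90, 0.99):
    print(f"Q({p:.2f}) = {mixture_quantile(p):,.0f}")
```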
3

Analysis of circular data in the dynamic model and mixture of von Mises distributions

Lan, Tian 10 December 2013 (has links)
Analysis of circular data is becoming increasingly popular in many fields of study. In this report, I present two statistical analyses of circular data using von Mises distributions. First, the expectation-maximization (EM) algorithm is reviewed and used to classify circular data and estimate the parameters of a mixture of von Mises distributions. Second, the forward filtering backward smoothing method via particle filtering is reviewed and implemented for circular data arising in dynamic state-space models.
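A minimal sketch of the first analysis, assuming a two-component von Mises mixture: the E-step computes responsibilities, and the M-step updates the circular means in closed form and the concentrations via a standard approximation. All data and settings are illustrative.

```python
# Sketch: EM for a two-component von Mises mixture on circular data.
# Mean directions use the closed-form weighted circular mean; kappa uses
# the Banerjee et al. approximation. Settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta = np.concatenate([
    stats.vonmises.rvs(4.0, loc=0.5, size=300, random_state=rng),
    stats.vonmises.rvs(6.0, loc=2.8, size=200, random_state=rng),
])

K = 2
w = np.full(K, 1.0 / K)
mu = np.array([0.0, np.pi])
kappa = np.ones(K)

for _ in range(200):
    # E-step: responsibility of each component for each angle.
    dens = np.stack([w[k] * stats.vonmises.pdf(theta, kappa[k], loc=mu[k])
                     for k in range(K)])
    resp = dens / dens.sum(axis=0)
    # M-step: weighted circular means and concentrations.
    for k in range(K):
        nk = resp[k].sum()
        C = (resp[k] * np.cos(theta)).sum()
        S = (resp[k] * np.sin(theta)).sum()
        mu[k] = np.arctan2(S, C)
        rbar = min(np.hypot(C, S) / nk, 0.99)   # mean resultant length
        kappa[k] = rbar * (2.0 - rbar**2) / (1.0 - rbar**2)
        w[k] = nk / len(theta)

print("weights:", np.round(w, 3), "means:", np.round(mu, 3),
      "kappas:", np.round(kappa, 2))
```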
4

Statistical identification of metabolic reactions catalyzed by gene products of unknown function

Zheng, Lianqing January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Gary L. Gadbury / High-throughput metabolite analysis is an approach used by biologists seeking to identify the functions of genes. A mutation in a gene encoding an enzyme is expected to alter the levels of the metabolites that serve as the enzyme's reactant(s) (also known as substrates) and product(s). To find the function of a mutated gene, metabolite data from a wild-type organism and a mutant are compared and candidate reactants and products are identified. The screening principle is that the concentration of reactants will be higher, and the concentration of products lower, in the mutant than in the wild type, because the mutation reduces the reaction between the reactant and the product in the mutant organism. Based upon this principle, we suggest a method to screen the possible lipid reactant and product pairs related to a mutation affecting an unknown reaction. Numerical facts are established for the treatment means of the lipid pairs in each treatment group, and relations between the means are derived for the paired lipids. A set of test statistics is derived from these relations between the means of the lipid pairs. Reactant and product lipid pairs associated with specific mutations are used to assess the results. We have explored four methods that use the test statistics to obtain a list of potential reactant-product pairs affected by the mutation. The first method uses the parametric bootstrap to obtain an empirical null distribution of the test statistic, together with a technique to identify a family of distributions and corresponding parameter estimates for modeling that null distribution. The second method uses a mixture of normal distributions to model the empirical bootstrap null. The third method uses a normal mixture model with multiple components to model the entire distribution of test statistics from all pairs of lipids; the argument is made that, in some cases, one model component captures the lipid pairs affected by the mutation while the other components model the null distribution. The fourth method uses a two-way ANOVA model with an interaction term to relate the mean concentrations to the role of a lipid as a reactant or product in a specific lipid pair. The goal of all methods is to identify a list of findings using false discovery rate techniques. Finally, a simulation technique is proposed to evaluate properties of statistical methods for identifying candidate reactant-product pairs.
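The overall shape of the first method can be sketched as follows: a parametric bootstrap null for a pair-screening statistic, followed by Benjamini-Hochberg FDR control. The statistic and all data below are illustrative assumptions, not the dissertation's exact definitions.

```python
# Sketch: parametric bootstrap null for a reactant-product screening
# statistic, then Benjamini-Hochberg FDR. Data are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_pairs, n_rep = 500, 6

# Illustrative statistic for a lipid pair (reactant r, product p):
# T = (mean_r_mutant - mean_r_wt) - (mean_p_mutant - mean_p_wt),
# large when the reactant accumulates and the product is depleted.
def pair_stat(r_wt, r_mut, p_wt, p_mut):
    return (r_mut.mean() - r_wt.mean()) - (p_mut.mean() - p_wt.mean())

# Observed statistics: mostly null pairs plus 10 truly affected ones.
obs = np.empty(n_pairs)
for i in range(n_pairs):
    shift = 1.5 if i < 10 else 0.0
    obs[i] = pair_stat(rng.normal(0, 1, n_rep), rng.normal(shift, 1, n_rep),
                       rng.normal(0, 1, n_rep), rng.normal(-shift, 1, n_rep))

# Parametric bootstrap: draw the statistic under the no-effect model.
boot = np.array([pair_stat(rng.normal(0, 1, n_rep), rng.normal(0, 1, n_rep),
                           rng.normal(0, 1, n_rep), rng.normal(0, 1, n_rep))
                 for _ in range(20_000)])
pvals = (1 + (boot[None, :] >= obs[:, None]).sum(axis=1)) / (1 + len(boot))

# Benjamini-Hochberg step-up at FDR level q.
q, m = 0.05, n_pairs
order = np.argsort(pvals)
passed = pvals[order] <= q * np.arange(1, m + 1) / m
n_sig = passed.nonzero()[0].max() + 1 if passed.any() else 0
print("pairs flagged:", sorted(order[:n_sig].tolist()))
```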
5

Inferência em modelos de mistura via algoritmo EM estocástico modificado / Inference on Mixture Models via Modified Stochastic EM

Assis, Raul Caram de 02 June 2017 (has links)
We present the topic and theory of mixture models, reviewing theoretical aspects and interpretations of such mixtures. We develop the theory of these models in the maximum-likelihood and Bayesian frameworks. We review existing clustering methods in both frameworks, with emphasis on two: the stochastic EM algorithm on the maximum-likelihood side and the Dirichlet process mixture model on the Bayesian side. We propose a new method, a modification of the stochastic EM algorithm, which can be used to estimate the parameters of a mixture while allowing solutions with a varying number of groups.
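A minimal sketch of one stochastic EM iteration, shown for a Gaussian mixture with a fixed number of components: the S-step samples hard assignments in place of the E-step's soft responsibilities. The thesis's modification that lets the number of groups vary is not reproduced here; all settings are illustrative.

```python
# Sketch: stochastic EM (SEM) for a two-component Gaussian mixture.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 0.7, 300), rng.normal(3, 1.0, 200)])

K = 2
w, mu, sd = np.full(K, 0.5), np.array([-1.0, 1.0]), np.ones(K)

for _ in range(100):
    # S-step: sample a component label for each observation.
    dens = np.stack([w[k] * stats.norm.pdf(x, mu[k], sd[k]) for k in range(K)])
    prob = dens / dens.sum(axis=0)
    z = np.array([rng.choice(K, p=prob[:, i]) for i in range(len(x))])
    # M-step: update each component from its sampled members.
    for k in range(K):
        xk = x[z == k]
        if len(xk) > 1:
            w[k], mu[k], sd[k] = len(xk) / len(x), xk.mean(), xk.std(ddof=1)

print("weights:", np.round(w, 3), "means:", np.round(mu, 3),
      "sds:", np.round(sd, 3))
```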
6

Inferência em modelos de mistura via algoritmo EM estocástico modificado / Inference on mixture models via modified stochastic EM algorithm

Assis, Raul Caram de 02 June 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / We present the topic and theory of mixture models, reviewing theoretical aspects and interpretations of such mixtures. We develop the theory of these models in the maximum-likelihood and Bayesian frameworks. We review existing clustering methods in both frameworks, with emphasis on two: the stochastic EM algorithm on the maximum-likelihood side and the Dirichlet process mixture model on the Bayesian side. We propose a new method, a modification of the stochastic EM algorithm, which can be used to estimate the parameters of a mixture while allowing solutions with a varying number of groups.
7

Inferência em modelos de mistura via algoritmo EM estocástico modificado / Inference on Mixture Models via Modified Stochastic EM

Raul Caram de Assis 02 June 2017 (has links)
We present the topic and theory of mixture models, reviewing theoretical aspects and interpretations of such mixtures. We develop the theory of these models in the maximum-likelihood and Bayesian frameworks. We review existing clustering methods in both frameworks, with emphasis on two: the stochastic EM algorithm on the maximum-likelihood side and the Dirichlet process mixture model on the Bayesian side. We propose a new method, a modification of the stochastic EM algorithm, which can be used to estimate the parameters of a mixture while allowing solutions with a varying number of groups.
8

Securities trading in multiple markets : the Chinese perspective

Wang, Chaoyan January 2009 (has links)
This thesis studies the trading of Chinese American Depository Receipts (ADRs) and their respective underlying H shares issued in Hong Kong. The primary intention of this work is to investigate the arbitrage opportunity between the Chinese ADRs and their underlying H shares, motivated by the market observation that hedge funds are often among the top ten shareholders of these ADRs. We start from the origin of the Chinese ADRs, China's stock market, paying particular attention to the ownership structure of Chinese listed firms, because some of the ADR issuers also list A shares (exclusively owned by Chinese citizens) in Shanghai, and to the market microstructures and trading costs of the three China-related stock exchanges. We then study the arbitrage possibility empirically by comparing the return distributions of the two securities; we find that the two securities differ in their return distributions, owing to inequality in the higher moments, such as skewness and kurtosis. By the law of one price and weak-form market efficiency, identical securities traded in different markets should be priced similarly, since any deviation in their prices would be arbitraged away. Given the intrinsic property of ADRs that a convenient transfer mechanism exists between ADRs and their underlying shares, which makes arbitrage easy, the different return distributions suggest that arbitrage is costly, so that the equilibrium price of each security is determined mainly by its local market, where the Chinese ADRs or the underlying Hong Kong shares are traded: the demand for and supply of the stock in each market, the different market microstructures and mechanisms that produce different trading costs, and the different noise trading arising from asymmetric information across markets. Because of these trading costs, noise-trading risk, and liquidity risk, arbitrage opportunities between the two markets are not exploited promptly. This concern leads to the second aim of this work, namely how noise trading and trading costs come to determine asset prices, which we investigate empirically through the comovement effect and liquidity risk. We proceed along two strands. First, we test the relationship between the price differentials of the Chinese ADRs and the market returns of the US and Hong Kong markets, to examine the comovement effect caused by asynchronous noise trading. We find that the US market impact dominates the Hong Kong market impact, though both markets have a significant impact on the ADRs' price differentials. Second, we analyze the liquidity effect on the Chinese ADRs and their underlying Hong Kong shares, using two proxies to measure illiquidity cost and liquidity risk. We find a significant positive relation between return and trading volume, which is used to capture liquidity risk. This finding leads to a deeper study of the relationship between trading volume and return volatility from a market-microstructure perspective. To select a proper model for return volatility, we test for heteroscedasticity and then use two asymmetric GARCH models to capture the leverage effect.

We find that the Chinese ADRs and their underlying Hong Kong shares have different patterns of leverage effect as modeled by the two asymmetric GARCH models, which explains from another angle why the two securities are unequal in the higher moments of their return distributions. We then test two opposing hypotheses about the volume-volatility relation: the Mixture of Distributions Hypothesis suggests a positive relation between contemporaneous volume and volatility, while the Sequential Information Arrival Hypothesis implies a causal lead-lag relationship between volume and volatility. We find supportive evidence for the Sequential Information Arrival Hypothesis but not for the Mixture of Distributions Hypothesis.
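As a sketch of the asymmetric-GARCH step, the following simulates a toy GJR-GARCH(1,1) return series and fits it with the third-party "arch" package, where the o=1 term adds the leverage effect. All parameter values are illustrative assumptions, not the thesis's ADR or H-share estimates.

```python
# Sketch: simulate a GJR-GARCH(1,1) path, then fit an asymmetric GARCH
# model; a positive, significant gamma indicates a leverage effect.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)
n = 2000
omega, alpha, gamma, beta = 0.05, 0.05, 0.10, 0.85
z = rng.standard_normal(n)
eps = np.zeros(n)
h = np.zeros(n)
h[0] = omega / (1.0 - alpha - gamma / 2.0 - beta)   # unconditional variance
eps[0] = np.sqrt(h[0]) * z[0]
for t in range(1, n):
    # Negative shocks get the extra gamma loading: the leverage effect.
    h[t] = omega + (alpha + gamma * (eps[t-1] < 0)) * eps[t-1]**2 + beta * h[t-1]
    eps[t] = np.sqrt(h[t]) * z[t]

model = arch_model(eps, mean="Constant", vol="GARCH", p=1, o=1, q=1)
res = model.fit(disp="off")
print(res.params)   # compare gamma[1] across ADR and H-share series
```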
9

Aplicações em meta-análise sob um enfoque bayesiano usando dados médicos / Applications of meta-analysis under a Bayesian approach using medical data

Pissini, Carla Fernanda 21 March 2006 (has links)
Financiadora de Estudos e Projetos / In this work, we consider the use of meta-analysis under a Bayesian approach. Meta-analysis is a statistical technique that combines the results of different independent studies in order to reach general conclusions. The term was introduced by Glass (1976) and is used when the number of studies on a given research question is small. Models for meta-analysis usually involve a large number of parameters, and the Bayesian approach using MCMC (Markov chain Monte Carlo) methods is a good alternative for combining information from independent studies; hierarchical Bayesian models allow several independent studies to be combined to obtain accurate inferences about a specified treatment. As illustration, we consider real medical data sets from different studies, for which we fit fixed-effects and random-effects models, also assuming mixtures of normal distributions for the regression-model errors. A further application relates meta-analysis and education, through the effect of teacher expectancy on students' IQ.
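A minimal sketch of a Bayesian random-effects meta-analysis, assuming the usual normal-normal hierarchical model with a Gibbs sampler written from scratch; the effect sizes and priors below are illustrative, not the dissertation's medical data.

```python
# Sketch: Gibbs sampler for y_i ~ N(theta_i, s_i^2) with known s_i,
# theta_i ~ N(mu, tau^2), flat prior on mu, Inverse-Gamma prior on tau^2.
import numpy as np

rng = np.random.default_rng(6)
y = np.array([0.28, 0.10, -0.05, 0.41, 0.22, 0.15])  # study effect sizes
s = np.array([0.12, 0.20, 0.15, 0.18, 0.10, 0.25])   # their standard errors
k = len(y)

mu, tau2 = 0.0, 0.1
a0, b0 = 1.0, 0.05              # Inverse-Gamma(a0, b0) prior on tau^2
mu_draws = []

for it in range(6000):
    # theta_i | rest: precision-weighted shrinkage toward mu.
    v = 1.0 / (1.0 / s**2 + 1.0 / tau2)
    theta = rng.normal(v * (y / s**2 + mu / tau2), np.sqrt(v))
    # mu | rest (flat prior): normal around the mean of the theta_i.
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))
    # tau^2 | rest: conjugate inverse-gamma update.
    rate = b0 + 0.5 * ((theta - mu)**2).sum()
    tau2 = 1.0 / rng.gamma(a0 + k / 2.0, 1.0 / rate)
    if it >= 1000:              # discard burn-in
        mu_draws.append(mu)

print(f"posterior mean effect: {np.mean(mu_draws):.3f} "
      f"(95% credible interval {np.percentile(mu_draws, 2.5):.3f} "
      f"to {np.percentile(mu_draws, 97.5):.3f})")
```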
10

Essays on Volatility Risk, Asset Returns and Consumption-Based Asset Pricing

Kim, Young Il 25 June 2008 (has links)
No description available.
