61

Testing the compatibility of constraints for parameters of a geodetic adjustment model

Lehmann, Rüdiger, Neitzel, Frank 06 August 2014 (has links) (PDF)
Geodetic adjustment models are often set up in such a way that the model parameters must fulfil certain constraints. The normalized Lagrange multipliers have been used as a measure of the strength of constraint, in the sense that if one of them exceeds a certain threshold in magnitude, the corresponding constraint is likely to be incompatible with the observations and with the remaining constraints. We show that these and similar measures can be deduced as test statistics of a likelihood ratio test of the statistical hypothesis that some constraints are incompatible in this sense. This has previously been done only for special constraints (Teunissen in Optimization and Design of Geodetic Networks, pp. 526–547, 1985). We start from the simplest case, in which the full set of constraints is tested, and proceed to the advanced case, in which each constraint is tested individually. Every test is worked out both for a known and for an unknown prior variance factor, and the corresponding distributions under the null and alternative hypotheses are derived. The theory is illustrated with the example of a double-levelled line.
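As a rough, hedged illustration of the kind of statistic discussed in this abstract (not the authors' actual derivation), the Python sketch below fits a toy constrained least-squares levelling model, recovers the Lagrange multipliers of the constraint, and evaluates a chi-squared likelihood-ratio-type statistic for the compatibility of all constraints under a known variance factor. The design matrix, observations, constraint, and variance factor are all invented for illustration.

```python
import numpy as np
from scipy import stats

# Toy double-levelled line (all numbers invented): two unknown heights x1, x2
# observed twice via height differences from/to a fixed benchmark; weights P = I.
A = np.array([[ 1.0, 0.0], [-1.0, 1.0], [0.0, -1.0],
              [ 1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
l = np.array([10.02, 5.01, -15.00, 10.00, 4.98, -15.03])
sigma0_sq = 0.02 ** 2                 # assumed (known) prior variance factor
C = np.array([[-1.0, 1.0]])           # hypothetical constraint: x2 - x1 = 5
w = np.array([5.0])

N = A.T @ A                                   # normal-equation matrix
x_free = np.linalg.solve(N, A.T @ l)          # unconstrained estimate
M = C @ np.linalg.solve(N, C.T)               # C N^-1 C^T
k = np.linalg.solve(M, C @ x_free - w)        # Lagrange multipliers (up to sign)

# Likelihood-ratio-type statistic for "all constraints are compatible";
# under H0 it is chi-squared with one degree of freedom per constraint.
T = float(k @ M @ k) / sigma0_sq
print(f"T = {T:.3f}, p-value = {stats.chi2.sf(T, df=C.shape[0]):.4f}")
```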
62

Distribuição normal assimétrica para dados de expressão gênica / Skew-normal distribution for gene expression data

Gomes, Priscila da Silva 17 April 2009 (has links)
Financiadora de Estudos e Projetos / Microarray technologies are used to measure the expression levels of a large number of genes, or fragments of genes, simultaneously under different conditions. This technology is useful for identifying genes responsible for genetic diseases. A common statistical methodology for determining whether a gene g shows evidence of differential expression is the t-test, which requires the assumption of normality for the data (Saraiva, 2006; Baldi & Long, 2001). However, this assumption does not always agree with the nature of the analyzed data. In this work we use the skew-normal distribution, described formally by Azzalini (1985), which has the normal distribution as a particular case, in order to relax the normality assumption. Under a frequentist approach, we carry out a simulation study to detect differences between gene expression levels under control and treatment conditions via the t-test, and a further simulation to examine the power of the t-test when an asymmetric model is assumed for the data. We also use the likelihood ratio test to verify the adequacy of an asymmetric model for the data.
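The following minimal sketch is in the spirit of the simulation described above, but it is not the dissertation's actual study design: it draws hypothetical control and treatment expression levels from Azzalini's skew-normal distribution (scipy's `skewnorm`) and estimates the empirical power of the two-sample t-test. The sample size, skewness parameter, and location shift are assumed values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, n_sim, alpha_skew, shift = 20, 2000, 5.0, 0.8   # hypothetical settings

rejections = 0
for _ in range(n_sim):
    # "Control" and "treatment" expression levels from Azzalini's skew-normal;
    # scipy's `a` argument is the shape (skewness) parameter of the SN density.
    control = stats.skewnorm.rvs(a=alpha_skew, size=n, random_state=rng)
    treatment = stats.skewnorm.rvs(a=alpha_skew, loc=shift, size=n, random_state=rng)
    _, p = stats.ttest_ind(control, treatment, equal_var=True)
    rejections += p < 0.05

print(f"Empirical power of the t-test under skew-normal data: {rejections / n_sim:.3f}")
```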
63

Testes em modelos weibull na forma estendida de Marshall-Olkin / Tests in Weibull models in the Marshall-Olkin extended form

Magalhães, Felipe Henrique Alves 28 December 2011 (has links)
Universidade Federal do Rio Grande do Norte / In survival analysis, the response is usually the time until the occurrence of an event of interest, called the failure time, and the main characteristic of survival data is the presence of censoring, i.e. partial observation of the response. Among the models that fit many practical situations well, the Weibull model occupies a prominent position. Distributions in the Marshall-Olkin extended form offer a generalization of baseline distributions that allows greater flexibility in fitting lifetime data. This work presents a simulation study comparing two test statistics, the likelihood ratio and the gradient statistics, using the Weibull distribution in its Marshall-Olkin extended form. The results show only a small advantage for the likelihood ratio test.
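As a hedged sketch of the ingredients named above (not the dissertation's simulation design, and ignoring censoring), the Python below writes down the Marshall-Olkin extended Weibull log-likelihood and performs a likelihood ratio test of the Marshall-Olkin parameter lam = 1, i.e. of the plain Weibull as the null model; the gradient test is omitted. The data and starting values are invented.

```python
import numpy as np
from scipy import stats, optimize

def mo_weibull_loglik(params, t):
    """Log-likelihood of the Marshall-Olkin extended Weibull (no censoring).

    Survival: S(t) = lam * Sw(t) / (1 - (1 - lam) * Sw(t)), with Weibull
    baseline Sw(t) = exp(-(t/scale)**shape); lam = 1 recovers the Weibull.
    """
    lam, shape, scale = params
    if min(lam, shape, scale) <= 0:
        return -np.inf
    sw = np.exp(-(t / scale) ** shape)                       # baseline survival
    fw = (shape / scale) * (t / scale) ** (shape - 1) * sw   # baseline density
    dens = lam * fw / (1.0 - (1.0 - lam) * sw) ** 2
    return np.sum(np.log(dens))

rng = np.random.default_rng(1)
t = stats.weibull_min.rvs(c=1.5, scale=2.0, size=200, random_state=rng)  # toy data

neg = lambda p: -mo_weibull_loglik(p, t)
full = optimize.minimize(neg, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
# Restricted fit under H0: lam = 1 (ordinary Weibull)
restr = optimize.minimize(lambda p: -mo_weibull_loglik([1.0, *p], t),
                          x0=[1.0, 1.0], method="Nelder-Mead")
lr = 2.0 * (restr.fun - full.fun)
print(f"LR statistic = {lr:.3f}, p = {stats.chi2.sf(lr, df=1):.3f}")
```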
64

Estimação e teste de hipótese baseados em verossimilhanças perfiladas / "Point estimation and hypothesis test based on profile likelihoods"

Michel Ferreira da Silva 20 May 2005 (has links)
The profile likelihood function is not a genuine likelihood function, and profile maximum likelihood estimators are typically inefficient and inconsistent. Additionally, the null distribution of the likelihood ratio test statistic can be poorly approximated by the asymptotic chi-squared distribution in finite samples when there are nuisance parameters. It is thus important to obtain adjustments to the likelihood function. Several authors, including Barndorff-Nielsen (1983, 1994), Cox and Reid (1987, 1992), McCullagh and Tibshirani (1990) and Stern (1997), have proposed modifications to the profile likelihood function, defined in such a way as to reduce the score and information biases. In this dissertation, we review several profile likelihood adjustments, as well as approximations to the adjustment proposed by Barndorff-Nielsen (1983, 1994) described in Severini (2000a). We present the derivations and the main properties of the different adjustments, and we obtain adjustments for likelihood-based inference in the two-parameter exponential family.
Numerical results on estimation and testing are provided. We also consider models that do not belong to the two-parameter exponential family: the GA0(alfa,gama,L) family, which is commonly used to model radar image data, and the Weibull model, which is useful for reliability studies, the latter under both noncensored and censored data. Again, extensive numerical results are provided. It is noteworthy that, in the context of the GA0(alfa,gama,L) model, we have evaluated the approximation of the null distribution of the signed likelihood ratio statistic by the standard normal distribution. Additionally, we have obtained distributional results for the Weibull case concerning the maximum likelihood estimators and the likelihood ratio statistic, both for noncensored and censored data; these results are presented in an appendix.
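As a minimal, hedged illustration of a profile likelihood (none of the specific adjustments reviewed above), the sketch below profiles the Weibull shape parameter by replacing the scale, treated as a nuisance parameter, with its closed-form conditional MLE, and then runs an ordinary, unadjusted profile-likelihood-ratio test of an exponential null. The sample and the null value are invented.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
t = stats.weibull_min.rvs(c=2.0, scale=3.0, size=50, random_state=rng)  # toy sample

def profile_loglik(shape, t):
    """Profile log-likelihood of the Weibull shape: the scale (nuisance
    parameter) is replaced by its conditional MLE, which is available in
    closed form for the Weibull."""
    scale_hat = np.mean(t ** shape) ** (1.0 / shape)
    return np.sum(stats.weibull_min.logpdf(t, c=shape, scale=scale_hat))

# Profile-likelihood-ratio test of H0: shape = 1 (exponential data)
res = optimize.minimize_scalar(lambda c: -profile_loglik(c, t),
                               bounds=(0.1, 10.0), method="bounded")
lr = 2.0 * (profile_loglik(res.x, t) - profile_loglik(1.0, t))
print(f"shape MLE ~ {res.x:.2f}, LR statistic = {lr:.2f}, "
      f"p = {stats.chi2.sf(lr, df=1):.4f}")
```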
65

MELHORAMENTOS INFERENCIAIS NO MODELO BETA-SKEW-T-EGARCH / INFERENTIAL IMPROVEMENTS OF BETA-SKEW-T-EGARCH MODEL

Muller, Fernanda Maria 25 February 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The Beta-Skew-t-EGARCH model was recently proposed in the literature to model the volatility of financial returns. Inference on the model parameters is based on the maximum likelihood method. The maximum likelihood estimators have good asymptotic properties; however, in finite samples they can be considerably biased. Monte Carlo simulations were used to evaluate the finite-sample performance of the point estimators. The numerical results indicated that the maximum likelihood estimators of some parameters are biased in sample sizes smaller than 3,000. Thus, bootstrap bias correction procedures were considered to obtain more accurate estimators in small samples; the bias-corrected estimators showed smaller percentage relative bias, and better forecast quality was observed when the model with bias-corrected estimators was used. In addition, we propose a likelihood ratio test to assist in choosing between the Beta-Skew-t-EGARCH model with one or two volatility components. The numerical evaluation of the two-component test showed distorted null rejection rates in sample sizes smaller than or equal to 1,000. To improve the performance of the proposed test in small samples, the bootstrap-based likelihood ratio test and the bootstrap Bartlett correction were considered; the bootstrap-based test exhibited null rejection rates closest to the nominal values, and the evaluation results showed the practical usefulness of the two-component tests. Finally, an application of the proposed methods to the log-returns of the German stock index was presented.
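The Beta-Skew-t-EGARCH likelihood itself is involved, so the sketch below only illustrates the bootstrap bias-correction idea mentioned above on a deliberately simpler maximum likelihood problem, the rate of an exponential sample; it is not the model or the correction used in the thesis. The sample size, true rate, and number of bootstrap replicates are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0 / 2.5, size=30)   # toy data, true rate = 2.5

rate_mle = 1.0 / x.mean()   # MLE of the rate; biased upward in small samples

# Parametric bootstrap bias correction (Efron-type):
# theta_bc = 2 * theta_hat - mean(bootstrap estimates)
B = 2000
boot = np.empty(B)
for b in range(B):
    xb = rng.exponential(scale=1.0 / rate_mle, size=x.size)
    boot[b] = 1.0 / xb.mean()
rate_bc = 2.0 * rate_mle - boot.mean()

print(f"MLE = {rate_mle:.3f}, bootstrap bias-corrected = {rate_bc:.3f}")
```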
66

Neuronal Dissimilarity Indices that Predict Oddball Detection in Behaviour

Vaidhiyan, Nidhin Koshy January 2016 (has links) (PDF)
Our vision is as yet unsurpassed by machines because of the sophisticated representations of objects in our brains. This representation is vastly different from the pixel-based representation used in machine storage. It is this sophisticated representation that enables us to perceive two faces as very different, i.e., as far apart in the "perceptual space", even though they are close to each other in their pixel-based representations. Neuroscientists have proposed distances between the responses of neurons to images (as measured in macaque monkeys) as a quantification of the "perceptual distance" between the images; let us call these neuronal dissimilarity indices of perceptual distance. They have also proposed behavioural experiments to quantify these perceptual distances: human subjects are asked to identify, as quickly as possible, an oddball image embedded among multiple distractor images, and the reciprocal of the search time for identifying the oddball is taken as a measure of the perceptual distance between the oddball and the distractor; let us call such estimates behavioural dissimilarity indices. In this thesis, we describe a decision-theoretic model for visual search that suggests a connection between these two notions of perceptual distance. In the first part of the thesis, we model visual search as an active sequential hypothesis testing problem. Our analysis suggests an appropriate neuronal dissimilarity index which correlates strongly with the reciprocal of search times. We also consider a number of alternative possibilities, such as relative entropy (Kullback-Leibler divergence), the Chernoff entropy and the L1-distance associated with the neuronal firing rate profiles. We then develop a means to rank the various neuronal dissimilarity indices based on how well they explain the behavioural observations. Our proposed dissimilarity index does better than the other three, followed by relative entropy, then Chernoff entropy and then the L1 distance. In the second part of the thesis, we consider a scenario where the subject has to find an oddball image without any prior knowledge of the oddball and distractor images. Equivalently, in the neuronal space, the task for the decision maker is to find the image that elicits firing rates different from the others. Here, the decision maker has to "learn" the underlying statistics and then make a decision on the oddball. We model this scenario as one of detecting an odd Poisson point process having a rate different from the common rate of the others. The revised model suggests a new neuronal dissimilarity index, which is also strongly correlated with the behavioural data. However, the new index performs worse than the index proposed in the first part on the existing behavioural data. The degradation in performance may be attributed to the experimental setup used for these behavioural tasks, in which search tasks associated with a given image pair were sequenced one after another, thereby possibly cueing the subject about the upcoming image pair and thus violating this part's assumption that the decision maker lacks prior knowledge of the image pairs. In conclusion, the thesis provides a framework for connecting perceptual distances in the neuronal and behavioural spaces, which can possibly be used to analyze the connection between the neuronal space and the behavioural space for various other behavioural tasks.
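As a small, hedged sketch of the alternative indices named in this abstract (not the thesis's proposed index), the Python below computes the relative entropy (Kullback-Leibler divergence), the Chernoff information, and the L1 distance between two hypothetical Poisson firing-rate profiles; the rate values are invented.

```python
import numpy as np
from scipy import optimize

# Hypothetical firing-rate profiles (spikes/s) of the same neurons to two images
r1 = np.array([12.0, 30.0, 8.0, 22.0])
r2 = np.array([18.0, 21.0, 9.0, 35.0])

def kl_poisson(a, b):
    """Relative entropy D(Poisson(a) || Poisson(b)), summed over neurons."""
    return float(np.sum(a * np.log(a / b) - a + b))

def chernoff_poisson(a, b):
    """Chernoff information between independent Poisson rate profiles a and b."""
    f = lambda s: -np.sum(s * a + (1 - s) * b - a ** s * b ** (1 - s))
    res = optimize.minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
    return float(-res.fun)

l1 = float(np.sum(np.abs(r1 - r2)))
print(f"KL = {kl_poisson(r1, r2):.3f}, "
      f"Chernoff = {chernoff_poisson(r1, r2):.3f}, L1 = {l1:.1f}")
```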
67

Stochastic modelling of HIV/AIDS epidemiology with TB co-infection drug reaction in South Africa

Shoko, Claris 16 July 2015 (has links)
MSc (Statistics) / Department of Statistics
69

Robust Change Detection with Unknown Post-Change Distribution

Sargun, Deniz January 2021 (has links)
No description available.
70

MITIGATION of BACKGROUNDS for the LARGE UNDERGROUND XENON DARK MATTER EXPERIMENT

Lee, Chang 03 June 2015 (has links)
No description available.
