51

Selection and ranking procedures based on likelihood ratios

Chotai, Jayanti January 1979 (has links)
This thesis deals with random-size subset selection and ranking procedures derived through likelihood ratios, mainly in terms of the P*-approach. Let π_1, ..., π_k be k (≥ 2) populations such that π_i (i = 1, ..., k) has the normal distribution with unknown mean θ_i and variance a_i σ², where a_i is known and σ² may be unknown, and such that a random sample of size n_i is taken from π_i. To begin with, we give a procedure R_1 (with tables) which selects π_i if sup_{Ω_i} L(θ; x) ≥ c · sup_{Ω} L(θ; x), where Ω is the parameter space for θ = (θ_1, ..., θ_k); where Ω_i (⊂ Ω) is the set of all θ with θ_i = max_j θ_j; where L(·; x) is the likelihood function based on the total sample; and where c is the largest constant that makes the rule satisfy the P*-condition. Then we consider other likelihood ratios, with intuitively reasonable subspaces of Ω, and derive several new rules. Comparisons among some of these rules and rule R of Gupta (1956, 1965) are made using different criteria: numerical for k = 3, and a Monte Carlo study for k = 10. For the case when the populations have the uniform (0, θ_i) distributions and the sample sizes are unequal, we consider selection for the population with min_{1≤j≤k} θ_j. Comparisons with Barr and Rizvi (1966) are made, and generalizations are given. Rule R_1 is generalized to densities satisfying some reasonable assumptions (mainly unimodality of the likelihood and monotonicity of the likelihood ratio). An exponential class is considered, and the results are exemplified by the gamma density and the Laplace density. Extensions and generalizations to cover the selection of the t best populations (using various requirements) are given. Finally, a discussion of the complete ranking problem, and of the relation between subset selection based on likelihood ratios and statistical inference under order restrictions, is given.
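For a concrete sense of the selection rule, here is a rough numerical sketch of a rule of the R_1 type under the normal model with known variances. The constrained maximization over Ω_i is done by a generic optimizer, and the constant c is supplied by hand rather than calibrated to the P*-condition, so this illustrates only the structure of the rule, not the thesis's tabulated procedure.

```python
import numpy as np
from scipy.optimize import minimize

def lr_selection(xbar, se2, c):
    """Random-size subset selection via likelihood ratios (a sketch).

    Selects population i if sup_{Omega_i} L >= c * sup_{Omega} L, where
    Omega_i constrains theta_i to be the largest mean.
    xbar: sample means; se2: variances of the means (a_i * sigma^2 / n_i).
    """
    k = len(xbar)
    selected = []
    for i in range(k):
        # Maximize the log-likelihood kernel subject to theta_i >= theta_j.
        cons = [{"type": "ineq", "fun": lambda t, i=i, j=j: t[i] - t[j]}
                for j in range(k) if j != i]
        res = minimize(lambda t: np.sum((t - xbar) ** 2 / (2 * se2)),
                       x0=np.full(k, xbar.max()), constraints=cons)
        # Unconstrained sup is at theta = xbar, where the kernel is 0,
        # so -res.fun is the log of the likelihood ratio.
        if -res.fun >= np.log(c):
            selected.append(i)
    return selected

# Toy run: three populations, equal standard errors, hand-picked c.
print(lr_selection(np.array([0.0, 0.4, 0.5]), np.full(3, 0.04), c=0.05))
```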
52

A Study of Designs in Clinical Trials and Schedules in Operating Rooms

Hung, Wan-Ping 20 January 2011 (has links)
The design of clinical trials is one of the important problems in medical statistics. Its main purpose is to determine the methodology and the sample size required for a testing study to examine the safety and efficacy of drugs; it is also part of the Food and Drug Administration approval process. In this thesis, we first study the comparison of the efficacy of drugs in clinical trials. We focus on the two-sample comparison of proportions to investigate testing strategies based on a two-stage design. The properties and advantages of the procedures from the proposed testing designs are demonstrated by numerical results, where comparison with the classical method is made under the same sample size. A real example discussed in Cardenal et al. (1999) is provided to explain how the methods may be used in practice. Some figures are also presented to illustrate the pattern changes of the power functions of these methods. In addition, the proposed procedure is compared with the Pocock (1977) and O'Brien and Fleming (1979) tests based on the standardized statistics. In the second part of this work, the operating room scheduling problem is considered, which is also important in medical studies. The national health insurance system has been in operation for more than ten years in Taiwan. The Bureau of National Health Insurance continues to improve the system and tries to establish a reasonable fee ratio for people in different income ranges. In line with these adjustments of the national health insurance system, hospitals must pay more attention to controlling running costs. One major source of a hospital's revenue is its surgery center operations, so effective operating room management is necessary to maintain financial balance. For this topic, this study focuses on the model fitting of operating times and on operating room scheduling. Log-normal and mixture log-normal distributions are identified as statistically acceptable descriptions of these operating times. The procedure is illustrated through the analysis of thirteen operations performed in the gynecology department of a major teaching hospital in southern Taiwan. The best-fitting distributions, selected through information criteria and by bootstrapping the log-likelihood ratio test, are used to evaluate the performance of some operating combinations on daily schedules observed in the real data. Moreover, we classify the operations into three categories, with three stages for each operation, and based on this classification we propose an efficient scheduling strategy. The benefits of rescheduling under the proposed strategy are compared with the original schedules observed.
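Where the abstract compares log-normal and mixture log-normal fits, a minimal sketch of that model comparison might look like the following. The data are synthetic stand-ins for operating times, and the two-component mixture and BIC-based selection are assumptions for illustration, not the thesis's exact procedure.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# hypothetical operating times (minutes): two surgical subpopulations
times = np.concatenate([rng.lognormal(4.0, 0.3, 150),
                        rng.lognormal(4.8, 0.2, 50)])
log_t = np.log(times).reshape(-1, 1)

# single log-normal <=> normal fit on log-times (2 parameters)
mu, sd = log_t.mean(), log_t.std()
ll1 = stats.norm.logpdf(log_t, mu, sd).sum()
bic1 = -2 * ll1 + 2 * np.log(len(times))

# two-component mixture log-normal <=> Gaussian mixture on log-times
gm = GaussianMixture(n_components=2, random_state=0).fit(log_t)
bic2 = gm.bic(log_t)

# the Jacobian term sum(log t) is common to both models, so it cancels
print(f"BIC log-normal: {bic1:.1f}   BIC mixture log-normal: {bic2:.1f}")
```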
53

Heavy-tail statistical monitoring charts of the active managers' performance

Chen, Chun-Cheng 03 August 2006 (has links)
Many performance measurement algorithms can only evaluate active managers' performance after a period of operating time. However, most investors are interested in monitoring active managers' performance at any time, especially when it is deteriorating, so that they can adjust the targets and contents of their portfolios to reduce their risks. Yashchin, Thomas and David (1997) proposed using a statistical quality control (SQC) procedure to monitor active managers' performance. In particular, they established IR (information ratio) control charts under a normality assumption to monitor the dynamic performance of active managers. However, the distribution of the IR statistic usually has fat tails. Since the underlying distribution of the IR is a key hypothesis in building the control chart, we consider heavy-tailed distributions, such as the mixture normal and the generalized error distribution, to fit the IR data. Based on the fitted distributions, the IR control charts are rebuilt. Through simulations and empirical studies, the remedial control charts are found to detect shifts in active managers' performance more sensitively.
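As a sketch of the remedial idea, the snippet below fits a generalized error distribution to a synthetic fat-tailed IR series and places the lower control limit at a tail quantile of the fitted law, contrasting it with the normal-theory limit. The data and tail level are made up, and this is not the paper's exact chart construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ir = rng.standard_t(df=5, size=500) * 0.1   # hypothetical fat-tailed IR series

# generalized error distribution: beta < 2 means heavier-than-normal tails
beta, loc, scale = stats.gennorm.fit(ir)
alpha = 0.01                                 # false-alarm rate per observation
lcl_ged = stats.gennorm.ppf(alpha, beta, loc, scale)
lcl_normal = stats.norm.ppf(alpha, ir.mean(), ir.std())
print(f"GED lower limit: {lcl_ged:.4f}   normal lower limit: {lcl_normal:.4f}")
```

Under fat tails, the normal-theory limit sits too close to the center, which is exactly the excess-false-alarm problem the refitted charts address.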
54

A novel approach to modeling and predicting crash frequency at rural intersections by crash type and injury severity level

Deng, Jun, active 2013 24 March 2014 (has links)
Safety at intersections is of significant interest to transportation professionals due to the large number of possible conflicts that occur at those locations. In particular, rural intersections have been recognized as among the most hazardous locations on roads. However, most models of crash frequency at rural intersections, and road segments in general, do not differentiate between crash type (such as angle, rear-end or sideswipe) and injury severity (such as fatal injury, non-fatal injury, possible injury or property damage only). Thus, there is a need to identify the differential impacts of intersection-specific and other variables on crash types and severity levels. This thesis builds upon the work of Bhat et al. (2013b) to formulate and apply a novel approach for the joint modeling of crash frequency and combinations of crash type and injury severity. The proposed framework explicitly links a count data model (for crash frequency) with a discrete choice model (for combinations of crash type and injury severity); it uses a multinomial probit kernel for the discrete choice model, introduces unobserved heterogeneity in both the crash frequency model and the discrete choice model, and accommodates the excess of zeros. The results show that the type of traffic control and the number of entering roads are the most important determinants of crash counts and of crash type/injury severity, and the analysis underscores the value of the proposed model both for data fit and for accurately estimating variable effects.
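A toy generative version of the linked structure (not the authors' estimator) can make the setup concrete: a zero-inflated count model produces crash frequencies, and a multinomial-probit-style latent-utility draw assigns each crash a type/severity category. All coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_cat = 1000, 4

# count model with excess zeros: zero-inflated Poisson on a site covariate
x = rng.normal(size=n_sites)                  # hypothetical site covariate
lam = np.exp(0.2 + 0.5 * x)                   # Poisson mean
pi0 = 1 / (1 + np.exp(1.0 - 0.3 * x))         # probability of a structural zero
counts = np.where(rng.random(n_sites) < pi0, 0, rng.poisson(lam))

# discrete choice model: each crash gets the category whose Gaussian
# latent utility is largest (the multinomial probit kernel)
beta = np.array([0.0, -0.3, -0.6, -1.0])      # category-specific constants
tally = np.zeros(n_cat, dtype=int)
for i in np.flatnonzero(counts):
    u = beta + rng.normal(size=(counts[i], n_cat))
    np.add.at(tally, u.argmax(axis=1), 1)

print("total crashes:", counts.sum(), " by type/severity category:", tally)
```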
55

Paklaidos įvertis Centrinėje ribinėje teoremoje / Error estimate in the Central limit theorem

Kasparavičiūtė, Aurelija 19 June 2008 (has links)
This master's thesis considers independent and identically distributed random variables having all absolute moments finite. The main task is to determine an error estimate for the rate of convergence to the normal law. The work consists of eight chapters. The introduction describes the problem and all subjects of the research. The second chapter is devoted to theoretical analysis: it presents the main theoretical results and the methods used to achieve the aims of the thesis. The third chapter deals with cumulants in the case of the Bernoulli scheme, and the fourth analyzes the Chebyshev asymptotic expansion and studies its convergence graphically with the help of the mathematical package Maple. The method of characteristic functions is used to bound the remainder term of the normal approximation, so the fifth chapter is devoted to refining the smoothing inequalities. In the sixth chapter the main result of the thesis is obtained from these tools, and in the seventh the absolute error estimate is verified in the Bernoulli case, again using Maple. The conclusions and results are summarized briefly in the eighth chapter.
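A standard way to make such an error estimate concrete is the Berry-Esseen bound. The sketch below evaluates it for the Bernoulli scheme and compares it with the approximation error observed at the lattice points, using the admissible constant 0.4748 from Shevtsova (2011) rather than anything specific to this thesis.

```python
import numpy as np
from scipy import stats

def berry_esseen_bound(p, n, C=0.4748):
    """Berry-Esseen bound on sup_x |F_n(x) - Phi(x)| for Bernoulli(p)."""
    sigma = np.sqrt(p * (1 - p))
    rho = p * (1 - p) * (p**2 + (1 - p)**2)   # E|X - p|^3 for Bernoulli(p)
    return C * rho / (sigma**3 * np.sqrt(n))

p, n = 0.3, 200
k = np.arange(n + 1)
z = (k - n * p) / np.sqrt(n * p * (1 - p))
observed = np.max(np.abs(stats.binom.cdf(k, n, p) - stats.norm.cdf(z)))
print(f"observed error {observed:.4f}  <=  bound {berry_esseen_bound(p, n):.4f}")
```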
56

Avaliação de valores em risco em séries de retorno financeiro / Value at risk evaluation in financial return time series

Camilla Ferreira Gomes 18 December 2017 (has links)
The methods generally employed in the market for computing risk measures are based on the distribution adopted for the financial returns. When the normal distribution is adopted, these evaluations tend to underestimate the Value at Risk (VaR), since the normal distribution has lighter tails than those observed in financial series. Many alternative distributions have been proposed in the literature, but any proposed alternative must be evaluated with respect to the computational effort spent computing the value at risk, compared to the simplicity afforded by the normal distribution. This dissertation therefore evaluates several models for computing the value at risk, such as the empirical-quantile approach, the normal distribution and the autoregressive (AR) model, checking which best fits the tail of the distribution of financial return series, and also assesses the impact of applying the computed VaR to the following year. In this context we highlight the autoregressive conditional heteroskedasticity (ARCH) model, which captures the volatility present in financial return series. This model has proved to be more efficient, able to generate relevant information for investors and the financial market at a moderate computational cost.
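The core comparison can be sketched in a few lines: a normal-theory VaR against an empirical-quantile VaR on a synthetic fat-tailed return series. The data and confidence level are made up, and the AR/ARCH modeling of the dissertation is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
returns = rng.standard_t(df=4, size=2500) * 0.01   # hypothetical daily returns
alpha = 0.01                                       # 99% one-day VaR

# normal VaR: quantile of a fitted normal; empirical VaR: sample quantile
var_normal = -(returns.mean() + returns.std() * stats.norm.ppf(alpha))
var_empirical = -np.quantile(returns, alpha)
print(f"normal VaR: {var_normal:.4f}   empirical VaR: {var_empirical:.4f}")
# with t(4) tails the normal VaR is typically the smaller (underestimated) one
```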
57

Um modelo de resposta ao item para grupos múltiplos com distribuições normais assimétricas centralizadas / A multiple group IRT model with skew-normal latent trait distribution under the centred parametrization

Santos, José Roberto Silva dos, 1984- 20 August 2018 (has links)
Advisor: Caio Lucidius Naberezny Azevedo / Master's thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / A usual assumption for parameter estimation in item response models (IRM) is that the latent traits follow a normal distribution. However, many works suggest that this assumption does not hold in many cases, for example, the works of Micceri (1989) and Bazán et al. (2006). Recently, Azevedo et al. (2011) proposed an IRM with a skew-normal distribution under the centred parametrization for the latent traits, considering a single group of examinees. In the present work we develop an extension of this model to account for multiple groups. We develop two MCMC algorithms for parameter estimation, using the augmented-data structure to represent the item response function (IRF), see Albert (1992). The first is a Metropolis-Hastings-within-Gibbs sampler. In the second, we use stochastic representations (creating a hierarchical structure) of the prior distributions of the latent traits and population parameters, thereby obtaining known full conditional distributions, which enables a full Gibbs sampler. We compared these algorithms using the effective sample size criterion, see Sahu (2002); the full Gibbs sampler presented the best performance. We also evaluated the impact of the number of examinees per group, the number of items per group, the number of common items, the priors and the asymmetry of the reference group on parameter recovery. The results indicate that our approach recovers all parameters properly, mainly when the Jeffreys prior is used. Furthermore, the number of items per group and the number of examinees per group have a high impact on the recovery of the latent traits and the item parameters, respectively. We analyzed a real data set showing evidence of asymmetry in the latent trait distributions of some groups; the results obtained with our model confirm the presence of asymmetry in most groups. We studied diagnostic measures based on the predictive distribution of appropriate discrepancy measures. Finally, we compared the symmetric and asymmetric models using the criteria suggested by Spiegelhalter et al. (2002); the asymmetric model fits the data better according to all criteria.
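As a rough illustration of the centred parametrization, the sketch below converts a (mean, variance, skewness) triple into the usual location-scale-shape parameters of the skew-normal and draws latent traits accordingly. The conversion uses the standard moment formulas for the skew-normal, and the parameter values are invented.

```python
import numpy as np
from scipy import stats

def skewnorm_centred(mean, var, skew, size, rng):
    """Draw skew-normal variates given centred parameters (mean, var, skewness).

    Valid only for |skew| < 0.9953, the attainable skewness range.
    """
    t = (2.0 * abs(skew) / (4.0 - np.pi)) ** (1.0 / 3.0)
    mu_z = np.sign(skew) * t / np.sqrt(1.0 + t**2)   # E[Z], standardized SN
    delta = mu_z * np.sqrt(np.pi / 2.0)
    a = delta / np.sqrt(1.0 - delta**2)              # shape
    omega = np.sqrt(var / (1.0 - mu_z**2))           # scale
    xi = mean - omega * mu_z                         # location
    return stats.skewnorm.rvs(a, loc=xi, scale=omega, size=size,
                              random_state=rng)

rng = np.random.default_rng(5)
theta = skewnorm_centred(0.0, 1.0, 0.5, 10_000, rng)
print(theta.mean(), theta.var(), stats.skew(theta))  # approx (0, 1, 0.5)
```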
58

Analýza síly testů hypotéz / Statistical tests power analysis

Kubrycht, Pavel January 2016 (has links)
This thesis deals with the power of a statistical test and the associated problem of determining an appropriate sample size: the sample must be large enough to meet the requirements on the probabilities of errors of both the first and the second kind. The aim of the thesis is to demonstrate theoretical methods that lead to formulas for determining the minimum sample size. Three important probability distributions are considered: the normal, the Bernoulli, and the exponential.
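For the normal case, the familiar closed form for the minimum sample size of a two-sided z-test illustrates the kind of formula derived: n ≥ ((z_{1−α/2} + z_{1−β}) σ / Δ)². The sketch below assumes a known variance, a target shift Δ, and the usual α and power conventions.

```python
import math
from scipy.stats import norm

def min_sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Minimum n for a two-sided z-test of a normal mean to detect shift delta."""
    z_alpha = norm.ppf(1 - alpha / 2)   # controls the error of the first kind
    z_beta = norm.ppf(power)            # controls the error of the second kind
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

print(min_sample_size(delta=0.5, sigma=1.0))  # 32 for alpha=0.05, power=0.8
```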
59

Modelos de regressão Birnbaum-Saunders baseados na distribuição normal assimétrica centrada / Birnbaum-Saunders regression models based on skew-normal centered distribution

Chaves, Nathalia Lima, 1989- 26 August 2018 (has links)
Advisors: Caio Lucidius Naberezny Azevedo, Filidor Edilfonso Vilca Labra / Master's thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / The class of Birnbaum-Saunders (BS) models was developed from problems that arose in the field of material reliability, generally related to the study of material fatigue. In recent years, however, this class of models has been applied in areas outside that context, such as the health, environmental, forestry, demographic, actuarial and financial sciences, among others, due to its great versatility. In this work we develop the skew-normal Birnbaum-Saunders distribution under the centred parametrization (BSNAC), which extends the usual BS distribution and presents several advantages over the BS distribution based on the skew-normal distribution under the usual parametrization. We also develop a log-Birnbaum-Saunders linear regression model. We present several properties of both the BSNAC distribution and the related regression model. We develop estimation procedures under the frequentist and Bayesian approaches, as well as diagnostic tools for the proposed models, contemplating residual analysis and measures of influence. We conducted simulation studies under different scenarios in order to compare the frequentist and Bayesian estimates and to evaluate the performance of the diagnostic measures. The proposed methodology is illustrated with both simulated and real data sets.
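For reference, a BS(α, β) variate has the standard stochastic representation T = β(αZ/2 + √((αZ/2)² + 1))² with Z ~ N(0, 1). The sketch below uses it to draw lifetimes and checks the sample mean against E[T] = β(1 + α²/2); replacing Z with a skew-normal draw under the centred parametrization (as in the previous entry) would give a rough BSNAC-style generator. Parameter values are invented.

```python
import numpy as np

def rbs(alpha, beta, size, rng):
    """Birnbaum-Saunders(alpha, beta) draws via the N(0,1) representation."""
    w = alpha * rng.standard_normal(size) / 2.0
    return beta * (w + np.sqrt(w**2 + 1.0)) ** 2

rng = np.random.default_rng(6)
t = rbs(0.5, 2.0, 100_000, rng)
print(t.mean(), 2.0 * (1 + 0.5**2 / 2))  # sample mean vs E[T]
```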
60

Computation of High-Dimensional Multivariate Normal and Student-t Probabilities Based on Matrix Compression Schemes

Cao, Jian 22 April 2020 (has links)
The first half of the thesis focuses on the computation of high-dimensional multivariate normal (MVN) and multivariate Student-t (MVT) probabilities. Chapter 2 generalizes the bivariate conditioning method to a d-dimensional conditioning method and combines it with a hierarchical representation of the n × n covariance matrix. The resulting two-level hierarchical-block conditioning method requires Monte Carlo simulations to be performed only in d dimensions, with d ≪ n, and allows the dominant complexity term of the algorithm to be O(n log n). Chapter 3 improves the block reordering scheme from Chapter 2 and integrates it into Quasi-Monte Carlo simulation under the tile-low-rank representation of the covariance matrix. Simulations up to dimension 65,536 suggest that this method can improve the run time by one order of magnitude compared with the hierarchical Monte Carlo method. The second half of the thesis discusses a novel matrix compression scheme based on Kronecker products, an R package that implements the methods described in Chapter 3, and an application study with the probit Gaussian random field. Chapter 4 studies the potential of the sum of Kronecker products (SKP) as a compressed covariance matrix representation. Experiments show that the SKP representation can reduce the memory footprint by one order of magnitude compared with the hierarchical representation for covariance matrices from large grids, and that the Cholesky factorization in one million dimensions can be achieved within 600 seconds. Chapter 5 introduces an R package that implements the methods of Chapter 3 and shows how the package improves the accuracy of the computed excursion sets. Chapter 6 derives the posterior properties of the probit Gaussian random field, based on which model selection and posterior prediction are performed. With the tlrmvnmvt package, the computation becomes feasible in tens of thousands of dimensions, where the prediction errors are significantly reduced.
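For a point of reference on what is being computed, the sketch below estimates a low-dimensional MVN probability by plain Monte Carlo and checks it against scipy's Genz-type CDF. The hierarchical and tile-low-rank machinery of the thesis exists precisely because this brute-force approach becomes infeasible as n grows; the covariance here is made up.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
d = 5
# made-up exchangeable covariance: unit variances, correlation 0.5
cov = 0.5 * np.ones((d, d)) + 0.5 * np.eye(d)
upper = np.ones(d)

# plain Monte Carlo estimate of P(X <= upper)
L = np.linalg.cholesky(cov)
x = rng.standard_normal((1_000_000, d)) @ L.T
p_mc = np.mean(np.all(x <= upper, axis=1))

p_scipy = multivariate_normal(mean=np.zeros(d), cov=cov).cdf(upper)
print(f"Monte Carlo: {p_mc:.5f}   scipy cdf: {p_scipy:.5f}")
```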
