11

Uma análise sobre duas medidas de evidência: p-valor e s-valor / An analysis on two measures of evidence: p-value and s-value

Eriton Barros dos Santos 04 August 2016 (has links)
This work studies two measures of evidence: the p-value and the s-value. The likelihood ratio statistic is used to compute both. Informally, the p-value is the probability of an extreme event occurring under the conditions imposed by the null hypothesis, while the s-value is the greatest significance level at which the confidence region and the parameter space under the null hypothesis still have at least one element in common. For both measures, the smaller the value, the greater the degree of inconsistency between the observed data and the postulated null hypothesis. The study is restricted to simple and composite null hypotheses for independent, normally distributed data. The main results are: 1) analytical formulas for the p-value, obtained using conditional probabilities, and for the s-value; and 2) a comparison of the p-value and the s-value in different scenarios, namely known and unknown variance, and simple and composite null hypotheses. For simple null hypotheses the s-value coincides with the p-value; for composite null hypotheses the relationship between the two is more complex. When the variance is known, if the null hypothesis is a half-line the p-value is bounded above by the s-value, and if the null hypothesis is a closed interval the difference between the two measures decreases with the length of the interval specified in the null hypothesis. When the variance is unknown and the null hypothesis is composite, the s-value is bounded above by the p-value for small s-values (for example, when the s-value is less than 0.05).
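As a concrete illustration of the simplest case described in the abstract — i.i.d. normal data with known variance and a simple null hypothesis, where the two measures coincide — the following Python sketch (not taken from the thesis; the function names and the bisection search are illustrative assumptions) computes the two-sided p-value from the z statistic and the s-value as the greatest significance level whose confidence interval still contains the null value.

```python
import numpy as np
from scipy import stats

def p_value_simple_null(x, mu0, sigma):
    """Two-sided p-value from the z (equivalently, likelihood-ratio) statistic."""
    n = len(x)
    z = np.sqrt(n) * (np.mean(x) - mu0) / sigma
    return 2 * (1 - stats.norm.cdf(abs(z)))

def s_value_simple_null(x, mu0, sigma, iters=60):
    """Greatest significance level alpha whose (1 - alpha) confidence interval
    for the mean still contains mu0, found by bisection on alpha."""
    n = len(x)
    xbar = np.mean(x)
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        alpha = 0.5 * (lo + hi)
        half = stats.norm.ppf(1 - alpha / 2) * sigma / np.sqrt(n)
        if xbar - half <= mu0 <= xbar + half:
            lo = alpha   # interval still touches the null value: try a larger alpha
        else:
            hi = alpha
    return lo

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=50)
print(p_value_simple_null(x, mu0=0.0, sigma=1.0))
print(s_value_simple_null(x, mu0=0.0, sigma=1.0))   # agrees up to bisection tolerance
```

On simulated data the two printed values agree up to the bisection tolerance, matching the coincidence stated in the abstract for simple null hypotheses.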
12

Probabilistic pairwise model comparisons based on discrepancy measures and a reconceptualization of the p-value

Riedle, Benjamin N. 01 May 2018 (has links)
Discrepancy measures are often employed in problems involving the selection and assessment of statistical models. A discrepancy gauges the separation between a fitted candidate model and the underlying generating model. In this work, we consider pairwise comparisons of fitted models based on a probabilistic evaluation of the ordering of the constituent discrepancies. An estimator of the probability is derived using the bootstrap. In the framework of hypothesis testing, nested models are often compared on the basis of the p-value. Specifically, the simpler null model is favored unless the p-value is sufficiently small, in which case the null model is rejected and the more general alternative model is retained. Using suitably defined discrepancy measures, we mathematically show that, in general settings, the Wald, likelihood ratio (LR) and score test p-values are approximated by the bootstrapped discrepancy comparison probability (BDCP). We argue that the connection between the p-value and the BDCP leads to potentially new insights regarding the utility and limitations of the p-value. The BDCP framework also facilitates discrepancy-based inferences in settings beyond the limited confines of nested model hypothesis testing.
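The sketch below illustrates the flavour of a bootstrapped comparison probability for two nested normal-mean models. It is an assumption-laden stand-in, not the estimator derived in the dissertation: the negative log-likelihood on the original sample serves as a crude proxy for the discrepancy from the generating model, and the reported probability is the fraction of bootstrap refits on which the simpler model is the closer one.

```python
import numpy as np
from scipy import stats

def neg_loglik(y, mu, sigma):
    """Negative normal log-likelihood, used here as a crude discrepancy proxy."""
    return -np.sum(stats.norm.logpdf(y, loc=mu, scale=sigma))

def bootstrap_comparison_probability(y, x, n_boot=2000, seed=0):
    """Fraction of bootstrap refits on which the intercept-only model has a
    smaller discrepancy proxy than the simple linear regression."""
    rng = np.random.default_rng(seed)
    n = len(y)
    simpler_wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        yb, xb = y[idx], x[idx]
        # Null model: intercept only, refit on the resample.
        mu0, s0 = yb.mean(), yb.std(ddof=0)
        # Alternative model: simple linear regression, refit on the resample.
        beta = np.polyfit(xb, yb, deg=1)
        mu1 = np.polyval(beta, x)                      # predictions at the original x
        s1 = np.std(yb - np.polyval(beta, xb), ddof=0)
        # Discrepancy proxies: models fit on the resample, evaluated on the original data.
        d0 = neg_loglik(y, mu0, s0)
        d1 = neg_loglik(y, mu1, s1)
        simpler_wins += d0 < d1
    return simpler_wins / n_boot

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.2 * x + rng.normal(size=100)     # weak true slope: the two models are close
print(bootstrap_comparison_probability(y, x))
```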
13

Invariant Procedures for Model Checking, Checking for Prior-Data Conflict and Bayesian Inference

Jang, Gun Ho 13 August 2010 (has links)
We consider a statistical theory to be invariant when the results of two statisticians' independent data analyses, based upon the same statistical theory and using effectively the same statistical ingredients, are the same. We discuss three aspects of invariant statistical theories. First, both model checking and checking for prior-data conflict are assessments of a single null hypothesis without any specific alternative hypothesis. Hence, we conduct these assessments using a measure of surprise based on a discrepancy statistic. For the discrete case, it is natural to use the probability of obtaining a data point that is less probable than the observed data. For the continuous case, the natural analog of this is not invariant under equivalent choices of discrepancies. A new method is developed to obtain an invariant assessment. This approach also allows several discrepancies to be combined into one discrepancy via a single P-value. Second, Bayesians have developed many noninformative priors that are supposed to contain no information concerning the true parameter value. Many of these are data dependent or improper, which can lead to a variety of difficulties. Gelman (2006) introduced the notion of weak informativity as a compromise between informative and noninformative priors, but without a precise definition. We give a precise definition of weak informativity using a measure of prior-data conflict that assesses whether or not a prior places its mass around the parameter values having relatively high likelihood. In particular, we say a prior Pi_2 is weakly informative relative to another prior Pi_1 whenever Pi_2 leads to fewer prior-data conflicts a priori than Pi_1. This leads to a precise quantitative measure of how much less informative a weakly informative prior is. Third, in Bayesian data analysis, highest posterior density inference is a commonly used method. This approach is not invariant to the choice of dominating measure or reparametrizations. We explore properties of relative surprise inferences suggested by Evans (1997). Relative surprise inferences, which compare the changes in belief from a priori to a posteriori, are invariant under reparametrizations. We mainly focus on the connection of relative surprise inferences to classical Bayesian decision theory as well as important optimality properties.
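The discrete measure of surprise mentioned in the abstract has a simple direct implementation. The sketch below (an illustration under a Poisson null model, not code from the thesis) sums the probabilities of all outcomes that are less probable than the observed one.

```python
import numpy as np
from scipy import stats

def discrete_surprise(x_obs, pmf, support):
    """P( pmf(X) < pmf(x_obs) ) where X follows the hypothesised discrete model."""
    probs = pmf(support)
    return probs[probs < pmf(x_obs)].sum()

# Illustration: an observed count of 11 under a Poisson(4) null model.
lam = 4.0
support = np.arange(0, 200)                       # effectively the whole support
pmf = lambda k: stats.poisson.pmf(k, lam)
print(discrete_surprise(11, pmf, support))        # a small value flags surprise
```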
15

Infarkto gydymo įvairiais vaistais statistiniai tyrimo metodai / The statistical methods of investigation of the infarction treatment with the help of different drugs

Stasiukaitytė, Irma 10 June 2004 (has links)
The goal of the present thesis is to ascertain the impact of different drugs intended for infarction treatment, and to investigate other factors that may cause bleeding during the operation and in the post-operative period. The investigation was carried out in two stages. In the first stage the data were prepared for processing (checks of sample homogeneity and normality); in the second stage the statistical tasks corresponding to the goals of the thesis were solved. Methods of data analysis and binary logistic and linear regression models were applied. Eighty-nine patients who had survived a myocardial infarction were investigated, and no substantial difference was found between tranexamic acid and aprotinin. Bleeding complications may be caused by aspirin used before the operation. One of the complications, drainage, may be predicted from blood haemoglobin, haematocrit and creatinine levels. The binary logistic regression model led to the conclusion that smoking, hypothermia, EuroSCORE and other factors affect bleeding complications.
16

Influence of commodity costs on the price of FMCG products

Baituyakova, Danagul January 2015 (has links)
The goal of this thesis is to provide the reader with a comprehensive view of the cost-pricing process in a real FMCG company. Firstly, the thesis concentrates on the theoretical background of cost methodologies and pricing strategies from the perspective of a private firm. Secondly, the thesis presents a tool that calculates how a change in commodity cost is reflected in the shelf price of a good. Thirdly, statistical testing on historical data is applied to assess whether the model agrees with reality. In this part the thesis discusses the limitations of the model and gives further real-life examples of how the price is set beyond the influence of commodity costs. This enables the reader to draw conclusions from the given information and to understand more deeply the complexity of the FMCG industry.
17

Online and Face-To-Face Orthopaedic Surgery Education Methods

Austin, Erin, Glenn, L. Lee 01 January 2012 (has links)
No description available.
18

Portfolio s maximálním výnosem / Maximum Return Portfolio

Palko, Maximilián January 2019 (has links)
The classical method of portfolio selection is based on minimizing the variability of the portfolio. The Law of Large Numbers tells us that, over a long enough investment horizon, it should be enough to invest in the asset with the highest expected return, which will eventually outperform any other portfolio. In this thesis we suggest several portfolio construction methods that create Maximum Return Portfolios. These methods are based on finding the asset with the maximal expected return, thereby avoiding the problem of estimation errors in expected returns. Two of these methods are selected based on the results of a simulation analysis. These two methods are then tested on real stock data and compared with the S&P 500 index. The results of the testing suggest that our portfolios could have a real-world application, mainly because they proved to be significantly better than the index over a 10-year investment horizon.
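A hedged sketch of the core idea — estimate expected returns from a historical window, invest everything in the asset with the largest estimate, and compare with a benchmark — is shown below. The window length, the simulated data and the equally weighted benchmark are illustrative assumptions, not the thesis's methods or data.

```python
import numpy as np

def max_return_portfolio(returns, window=252):
    """returns: (T, k) array of per-period simple returns for k assets.
    Estimate expected returns on the first `window` periods, hold the single
    best asset afterwards, and compare with an equally weighted benchmark."""
    est = returns[:window].mean(axis=0)                  # in-sample expected returns
    best = int(np.argmax(est))                           # asset with the maximal estimate
    growth_best = np.prod(1.0 + returns[window:, best])
    growth_equal = np.prod(1.0 + returns[window:].mean(axis=1))
    return best, growth_best, growth_equal

# Simulated data with heterogeneous drifts (illustrative only).
rng = np.random.default_rng(2)
T, k = 252 * 11, 20                                      # ~1 year to estimate, ~10 to evaluate
drifts = rng.normal(0.0003, 0.0002, size=k)
rets = rng.normal(loc=drifts, scale=0.01, size=(T, k))
print(max_return_portfolio(rets))
```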
19

Design of adaptive multi-arm multi-stage clinical trials

Ghosh, Pranab Kumar 28 February 2018 (has links)
Two-arm group sequential designs have been widely used for over forty years, especially for studies with mortality endpoints. The natural generalization of such designs to trials with multiple treatment arms and a common control (MAMS designs) has, however, rarely been implemented. While the statistical methodology for this extension is clear, the main limitation has been an efficient way to perform the computations; past efforts were hampered by algorithms that were computationally explosive. With the increasing interest in adaptive designs, platform designs, and other innovative designs that involve multiple comparisons over multiple stages, the importance of MAMS designs is growing rapidly. This dissertation proposes a group sequential approach to designing MAMS trials in which the test statistic is the maximum of the cumulative score statistics for each pairwise comparison and is evaluated at each analysis time point against efficacy and futility stopping boundaries while maintaining strong control of the family-wise error rate (FWER). We start with a breakthrough algorithm that enables MAMS boundaries to be computed rapidly; this algorithm makes MAMS designs a practical reality. For designs with efficacy-only boundaries, the computational effort increases linearly with the number of arms and stages. For designs with both efficacy and futility boundaries, the computational effort doubles with each successive increase in the number of stages. Previous attempts to obtain MAMS boundaries were confined to smaller problems because their computational effort grew exponentially with the number of arms and stages. We next extend the proposed group sequential MAMS design to permit adaptive changes such as dropping treatment arms and increasing the sample size at each interim analysis time point. In order to control the FWER in the presence of these adaptations, the early stopping boundaries must be re-computed by invoking the conditional error rate principle and the closed testing principle. This adaptive MAMS design is immensely useful in phase 2 and phase 3 settings. An alternative to the group sequential approach for MAMS designs is the p-value combination approach, which has been in place for the last fifteen years and is based on combining independent p-values from the incremental data of each stage. Strong control of the FWER for this alternative approach is achieved by closed testing. We compare the operating characteristics of the two approaches both analytically and empirically via simulation, and demonstrate that the MAMS group sequential approach dominates the traditional p-value combination approach in terms of statistical power.
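The sketch below illustrates, on simulated data, the kind of calculation involved: under the global null, cumulative Z statistics for each treatment-versus-control comparison are evaluated at each stage against an efficacy boundary, and the Monte Carlo rejection rate estimates the FWER. The boundaries, sample sizes and known-variance normal outcomes are illustrative assumptions; the dissertation's contribution is a fast exact computation of such boundaries rather than simulation.

```python
import numpy as np

def simulate_fwer(n_arms=3, n_stages=3, n_per_stage=50,
                  boundaries=(2.8, 2.5, 2.3), n_sim=20000, seed=3):
    """Monte Carlo FWER of a 'max of pairwise Z' MAMS design under the global null,
    with illustrative efficacy boundaries and known unit variance."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        ctrl_sum = 0.0
        trt_sums = np.zeros(n_arms)
        n_cum = 0
        rejected = False
        for stage in range(n_stages):
            n_cum += n_per_stage
            ctrl_sum += rng.normal(0.0, 1.0, size=n_per_stage).sum()
            trt_sums += rng.normal(0.0, 1.0, size=(n_arms, n_per_stage)).sum(axis=1)
            # Cumulative Z statistic for each treatment-vs-control comparison.
            z = (trt_sums / n_cum - ctrl_sum / n_cum) * np.sqrt(n_cum / 2.0)
            if z.max() > boundaries[stage]:
                rejected = True
                break
        rejections += rejected
    return rejections / n_sim

print(simulate_fwer())   # estimated FWER for these illustrative boundaries
```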
20

Joint Models for the Association of Longitudinal Binary and Continuous Processes With Application to a Smoking Cessation Trial

Liu, Xuefeng, Daniels, Michael J., Marcus, Bess 01 June 2009 (has links)
Joint models for the association of a longitudinal binary and a longitudinal continuous process are proposed for situations in which their association is of direct interest. The models are parameterized such that the dependence between the two processes is characterized by unconstrained regression coefficients. Bayesian variable selection techniques are used to parsimoniously model these coefficients. A Markov chain Monte Carlo (MCMC) sampling algorithm is developed for sampling from the posterior distribution, using data augmentation steps to handle missing data. Several technical issues are addressed to implement the MCMC algorithm efficiently. The models are motivated by, and are used for, the analysis of a smoking cessation clinical trial in which an important question of interest was the effect of the (exercise) treatment on the relationship between smoking cessation and weight gain.
