1

Sharpening the Boundaries of the Sequential Probability Ratio Test

Krantz, Elizabeth 01 May 2012 (has links)
In this thesis, we present an introduction to Wald's Sequential Probability Ratio Test (SPRT) for binary outcomes. Previous researchers have investigated ways to modify the stopping boundaries to reduce the expected sample size of the test. In this research, we investigate ways to improve these boundaries further. For a given maximum allowable sample size, we develop a method intended to generate all possible sets of boundaries, and we then find the single set that minimizes the maximum expected sample size while still preserving the nominal error rates. Once these boundaries have been identified, we present the results of simulation studies conducted on them as a means of analyzing both the expected number of observations and the variability in the sample size required to reach a decision.
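For reference, below is a minimal sketch of the classical Wald SPRT for a Bernoulli parameter, with Wald's approximate boundaries A = (1 − β)/α and B = β/(1 − α); the thesis refines boundaries of this kind, so this is only the baseline, not the method developed here.

```python
import math

def wald_sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Classical Wald SPRT for H0: p = p0 vs. H1: p = p1 on binary outcomes.

    Uses Wald's approximate boundaries A = (1 - beta)/alpha and
    B = beta/(1 - alpha); these are the boundaries the thesis improves on.
    """
    log_a = math.log((1 - beta) / alpha)   # upper boundary: reject H0
    log_b = math.log(beta / (1 - alpha))   # lower boundary: accept H0
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # Log-likelihood-ratio increment for one Bernoulli observation.
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= log_a:
            return "reject H0", n
        if llr <= log_b:
            return "accept H0", n
    return "no decision", len(observations)

# e.g. wald_sprt([1, 1, 0, 1, 1, 1, 1, 1], p0=0.5, p1=0.8)
```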
2

Sequential Procedures for the "Selection" Problems in Discrete Simulation Optimization

Wenyu Wang (7491243) 17 October 2019 (has links)
Simulation optimization problems are nonlinear optimization problems whose objective functions can be evaluated only through stochastic simulation. We study two significant discrete simulation optimization problems in this thesis: Ranking and Selection (R&S) and Factor Screening (FS). Both R&S and FS are "selection" problems defined on a finite set of candidate systems or factors. They differ mainly in their objectives: R&S seeks the "best" system(s) among all alternatives, whereas FS selects the factors that are critical to the stochastic system.

In this thesis, we develop efficient sequential procedures for these two problems. For the R&S problem, we propose fully sequential procedures for selecting the "best" systems with a guaranteed probability of correct selection (PCS). The main features of these methods are: (1) a Bonferroni-free model: the procedures overcome the conservativeness of the Bonferroni correction and deliver the exact probabilistic guarantee without overshooting; (2) asymptotic optimality: the procedures achieve the lower bound on average sample size asymptotically; (3) an indifference-zone-flexible formulation: the procedures bridge the gap between the indifference-zone and indifference-zone-free formulations, so the indifference-zone parameter is not indispensable but can be helpful if provided. We establish the validity and asymptotic efficiency of the proposed procedures and conduct numerical studies to investigate their performance under multiple configurations.

We also consider the multi-objective R&S (MOR&S) problem. To the best of our knowledge, the proposed procedure is the first frequentist approach for MOR&S. These procedures identify the Pareto front with a guaranteed probability of correct selection (PCS). In particular, they are fully sequential, using test statistics built upon the Generalized Sequential Probability Ratio Test (GSPRT). The main features are: (1) an objective-dimension-free model: the performance of these procedures does not deteriorate as the number of objectives increases, and they achieve the same efficiency as the KN family of procedures for single-objective ranking and selection; (2) an indifference-zone-flexible formulation: the new methods eliminate the need for an indifference-zone parameter while making use of indifference-zone information if provided. A numerical evaluation demonstrates the validity and efficiency of the new procedure.

For the FS problem, our objective is to identify important factors for simulation experiments with a controlled family-wise error rate. We assume a multi-objective first-order linear model in which the responses follow a multivariate normal distribution. We offer three fully sequential procedures: the Sum Intersection Procedure (SUMIP), the Sort Intersection Procedure (SORTIP), and the Mixed Intersection Procedure (MIP). SUMIP uses the Bonferroni correction to adjust for multiple comparisons; SORTIP uses Holm's procedure to overcome the conservativeness of the Bonferroni method; and MIP combines SUMIP and SORTIP to work efficiently in a parallel computing environment. Numerical studies demonstrating validity and efficiency are provided, along with a case study.
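For context, the sketch below is the classical KN-style fully sequential elimination loop that Bonferroni-free and indifference-zone-flexible procedures are usually benchmarked against; the first-stage size n0, the KN constant, and the `simulate` callback are illustrative assumptions, not the procedures proposed in the thesis.

```python
import numpy as np

def kn_select(simulate, k, n0=20, alpha=0.05, delta=0.1, rng=None):
    """Simplified KN-style fully sequential ranking-and-selection sketch.

    `simulate(i, rng)` returns one noisy observation of system i's
    performance; `delta` is the indifference-zone parameter. This is the
    classical indifference-zone baseline, not the Bonferroni-free or
    IZ-flexible procedures developed in the thesis.
    """
    rng = rng or np.random.default_rng()
    eta = 0.5 * (((2 * alpha / (k - 1)) ** (-2 / (n0 - 1))) - 1)  # KN constant
    h2 = 2 * eta * (n0 - 1)
    samples = {i: [simulate(i, rng) for _ in range(n0)] for i in range(k)}
    # S2[i, j]: first-stage sample variance of the pairwise differences.
    S2 = {(i, j): np.var(np.array(samples[i]) - np.array(samples[j]), ddof=1)
          for i in range(k) for j in range(k) if i != j}
    alive, n = set(range(k)), n0
    while len(alive) > 1:
        means = {i: np.mean(samples[i]) for i in alive}
        for i in list(alive):
            # Eliminate i if a survivor j beats it beyond the shrinking band.
            for j in alive - {i}:
                w = max(0.0, (delta / (2 * n)) * (h2 * S2[i, j] / delta**2 - n))
                if means[i] < means[j] - w:
                    alive.discard(i)
                    break
        for i in alive:                      # one more observation per survivor
            samples[i].append(simulate(i, rng))
        n += 1
    return alive.pop()

# Example: five normal systems with means 0.0, 0.1, ..., 0.4 and unit variance.
best = kn_select(lambda i, rng: rng.normal(0.1 * i, 1.0), k=5)
```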
3

The Cointegration of Exchange Rate, Interest Rate, Money Supply, and Real GNP: An Application of the Johansen Sequential Testing Procedure

吳明修, Wu, Ming-Shou Unknown Date (has links)
The main purpose of this thesis is to compare the results produced by different models and different tests. Different choices of lag length yield different models and different numbers of cointegrating relations, so the cointegration relationships obtained under different lag lengths and models also differ. This thesis therefore examines how the cointegration relationships implied by the various models perform under different lag lengths. We use the Johansen Sequential Testing Procedure to determine the number of cointegrating relations and the data generating process (DGP) simultaneously, and to investigate the cointegration relationships themselves. The procedure still has drawbacks: for example, the model selected by the Johansen Sequential Testing Procedure changes with the chosen lag length, and the estimated cointegrating coefficients identify only a basis of the cointegrating space.
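The sequential step itself is short. The sketch below assumes statsmodels' coint_johansen (a tooling choice, not taken from the thesis): test H0: rank ≤ r for r = 0, 1, ... and accept the first rank that the trace statistic fails to reject.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def johansen_sequential_rank(data, det_order=0, k_ar_diff=2):
    """Sequential trace test: accept the first rank r whose H0 (rank <= r)
    is not rejected at the 5% level.

    `data` is a (T x k) array, e.g. columns for the exchange rate, interest
    rate, money supply, and real GNP. As the thesis notes, the outcome can
    change with the lag length `k_ar_diff` and the deterministic-term
    specification `det_order`; the values here are illustrative only.
    """
    res = coint_johansen(data, det_order, k_ar_diff)
    # res.lr1: trace statistics; res.cvt[:, 1]: 95% critical values.
    for r, (trace_stat, cv_5pct) in enumerate(zip(res.lr1, res.cvt[:, 1])):
        if trace_stat < cv_5pct:   # fail to reject H0: rank <= r
            return r
    return data.shape[1]           # full rank: the series are stationary
```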
4

Interfaces between Bayesian and Frequentist Multiple Testing

CHANG, SHIH-HAN January 2015 (has links)
This thesis investigates frequentist properties of Bayesian multiple testing procedures in a variety of scenarios and characterizes the asymptotic behavior of Bayesian methods. Both Bayesian and frequentist approaches to multiplicity control are studied and compared, with special focus on understanding multiplicity control behavior when the test statistics are dependent.

Chapter 2 examines the problem of testing mutually exclusive hypotheses with dependent data. The Bayesian approach is shown to have excellent frequentist properties and is argued to be the most effective way of obtaining frequentist multiplicity control without sacrificing power. Chapter 3 generalizes the model to allow multiple signals and characterizes the asymptotic behavior of false positive rates and the expected number of false positives. Chapter 4 considers the problem of dealing with a sequence of different trials concerning some medical or scientific issue, and discusses possibilities for multiplicity control over the sequence. Chapter 5 addresses issues and efforts in reconciling frequentist and Bayesian approaches in sequential endpoint testing. We consider the conditional frequentist approach in sequential endpoint testing and show several examples in which the Bayesian and frequentist methodologies cannot be made to match.
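The mechanism by which a Bayesian analysis controls multiplicity can be seen in a toy two-groups empirical-Bayes calculation, where the null proportion is learned jointly from all test statistics; this is a hedged, independent-data illustration of the idea, not the dependent-data models the thesis analyzes.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def posterior_null_probs(z, theta=3.0):
    """Toy two-groups empirical-Bayes sketch of Bayesian multiplicity control.

    Each statistic is N(0,1) under the null and N(theta,1) under the
    alternative; the null proportion pi0 is estimated from all m statistics
    jointly, which is what penalizes multiplicity. Independence across
    tests is an assumption of this illustration.
    """
    def neg_marginal_loglik(pi0):
        lik = pi0 * norm.pdf(z) + (1 - pi0) * norm.pdf(z - theta)
        return -np.sum(np.log(lik))

    pi0 = minimize_scalar(neg_marginal_loglik, bounds=(1e-3, 1 - 1e-3),
                          method="bounded").x
    null_lik = pi0 * norm.pdf(z)
    return null_lik / (null_lik + (1 - pi0) * norm.pdf(z - theta))

# Toy run: 95 nulls and 5 signals; flag tests with posterior null prob < 0.5.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 95), rng.normal(3, 1, 5)])
print(np.sum(posterior_null_probs(z) < 0.5))
```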
5

The Estimation and Evaluation of Optimal Thresholds for Two Sequential Testing Strategies

Wilk, Amber R. 17 July 2013 (has links)
Many continuous medical tests rely on a threshold for diagnosis. Two sequential testing strategies are of interest: Believe the Positive (BP) and Believe the Negative (BN). BP classifies a patient as positive if the first test exceeds a threshold θ1, or if the first test is negative and the second test exceeds θ2. BN classifies a patient as positive if the first test exceeds a threshold θ3 and the second test exceeds θ4. A threshold pair θ = (θ1, θ2) or (θ3, θ4), depending on the strategy, is defined as optimal if it maximizes the generalized Youden index GYI = Se + r(Sp − 1). Of interest is whether these optimal threshold, or optimal operating point (OOP), estimates are "good" when calculated from a sample. The methods proposed in this dissertation derive formulae for estimating θ, assuming the tests follow a binormal distribution, using the Newton-Raphson algorithm with ridging. A simulation study assesses bias, root mean square error, the percentage of overestimation of Se/Sp, and the coverage of simultaneous confidence intervals and confidence regions for various sets of population parameters and sample sizes. The OOP estimates are also compared with traditional empirical estimates, and bootstrapping is used to estimate the variance of each optimal threshold pair estimate. The study shows that parameters such as the area under the curve, the ratio of standard deviations of the disease classification groups within tests, the correlation between tests within a disease classification, the total sample size, and the allocation of sample size to each disease classification group all influence OOP estimation. It also shows that this method improves on the empirical estimate. Equations are developed to help researchers estimate total sample size and SCI width; although these models did not produce high coefficients of determination, they are a good starting point when designing a study. A pancreatic cancer dataset illustrates the OOP estimation methodology for sequential tests.
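As a hedged illustration of the objective, the sketch below evaluates GYI = Se + r(Sp − 1) for the BP sequence under a binormal model with conditionally independent tests and finds the OOP by grid search; the dissertation handles correlated tests and solves for the optimum with the Newton-Raphson algorithm with ridging rather than a grid.

```python
import numpy as np
from scipy.stats import norm

def bp_gyi(th1, th2, mu=(1.5, 1.5), sigma=(1.0, 1.0), r=1.0):
    """GYI = Se + r*(Sp - 1) for the Believe-the-Positive sequence.

    Non-diseased scores on each test are N(0,1) and diseased scores are
    N(mu_k, sigma_k^2); for simplicity the two tests are assumed
    conditionally independent given disease status (an assumption of this
    sketch, not of the dissertation).
    """
    # Se: positive on test 1, or negative on test 1 and positive on test 2.
    se = (norm.sf(th1, mu[0], sigma[0])
          + norm.cdf(th1, mu[0], sigma[0]) * norm.sf(th2, mu[1], sigma[1]))
    # Sp: negative on test 1 and negative on test 2, given non-diseased.
    sp = norm.cdf(th1) * norm.cdf(th2)
    return se + r * (sp - 1)

# Crude grid search for the optimal operating point (OOP).
grid = np.linspace(-2, 4, 301)
t1, t2 = np.meshgrid(grid, grid)
gyi = bp_gyi(t1, t2)
i = np.unravel_index(np.argmax(gyi), gyi.shape)
print(f"theta1={t1[i]:.2f}, theta2={t2[i]:.2f}, GYI={gyi[i]:.3f}")
```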
6

COST AND ACCURACY COMPARISONS IN MEDICAL TESTING USING SEQUENTIAL TESTING STRATEGIES

Ahmed, Anwar 14 May 2010 (has links)
The practice of sequential testing is typically followed by an evaluation of accuracy, but often not by an evaluation of cost. This research described and compared three sequential testing strategies: believe the negative (BN), believe the positive (BP), and the less-examined believe the extreme (BE). All three strategies combine the results of two medical tests to diagnose a disease or medical condition. The strategies were described in terms of accuracy (using the maximum receiver operating characteristic, or MROC, curve) and cost of testing (defined as the proportion of subjects who need two tests for diagnosis), with the goal of minimizing the number of tests needed per subject while maintaining accuracy. It was shown that the cost of the test sequence could be reduced, without sacrificing accuracy beyond an acceptable range, by setting a tolerance (q) on maximum test sensitivity. This research introduced a newly developed ROC curve reflecting this reduced sensitivity and cost of testing, called the Minimum Cost Maximum Receiver Operating Characteristic (MCMROC) curve. Within these strategies, four parameters that could influence the performance of the combined tests were examined: the area under the curve (AUC) of each individual test, the ratio of standard deviations (b) of the assumed underlying disease and non-disease populations, the correlation (rho) between underlying disease populations, and disease prevalence. The following patterns were noted: under all parameter settings, the MROC curve of the BE strategy never performed worse than those of the BN and BP strategies, and BE most frequently had the lowest cost. The parameters tended to affect the MROC and MCMROC curves less than they affected the cost curves, which varied greatly. The AUC values and the ratio of standard deviations both had a greater effect on the cost, MROC, and MCMROC curves than prevalence and correlation did. The use of BMI and plasma glucose concentration to diagnose diabetes in Pima Indians was presented as a real-world application of these strategies; the BN and BE strategies were found to be the most consistently accurate and least expensive choices.
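Under a binormal model for the first test, the cost (the proportion of subjects needing a second test) is a one-line calculation per strategy. The retest rules below, including which first-test results trigger the second test under BE, are illustrative assumptions rather than definitions taken from this dissertation.

```python
from scipy.stats import norm

def cost(strategy, thresholds, prev=0.1, mu=1.5, sigma=1.0):
    """Expected proportion of subjects needing a second test.

    Binormal model for the first test: N(0,1) if non-diseased and
    N(mu, sigma^2) if diseased. Illustrative retest rules (assumptions):
      BP -- retest those negative on test 1 (T1 <= t)
      BN -- retest those positive on test 1 (T1 > t)
      BE -- retest those with an intermediate result (lo < T1 <= hi)
    """
    def p_retest(loc, scale):
        if strategy == "BP":
            return norm.cdf(thresholds[0], loc, scale)
        if strategy == "BN":
            return norm.sf(thresholds[0], loc, scale)
        lo, hi = thresholds                        # BE
        return norm.cdf(hi, loc, scale) - norm.cdf(lo, loc, scale)

    return prev * p_retest(mu, sigma) + (1 - prev) * p_retest(0.0, 1.0)

# BE can be cheap: extreme first-test results decide immediately.
print(cost("BP", (1.0,)), cost("BN", (1.0,)), cost("BE", (0.0, 2.0)))
```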
7

Valuation and Optimal Strategies in Markets Experiencing Shocks

Dyrssen, Hannah January 2017 (has links)
This thesis treats a range of stochastic methods with various applications, most notably in finance. It comprises five articles, together with a summary of the key concepts and results on which they build. The first two papers consider a jump-to-default model, in which some quantity, e.g. the price of a financial asset, is represented by a stochastic process that has continuous sample paths except for the possibility of a sudden drop to zero. Paper I studies prices of European-type options in this model, together with the partial integro-differential equation that characterizes the price. Paper II finds the price of a perpetual American put option in the same model in terms of explicit formulas. Both papers also study the parameter monotonicity and convexity properties of the option prices. The third and fourth articles deal with valuation problems in a jump-diffusion model. Paper III concerns the optimal level at which to exercise an American put option with finite time horizon; more specifically, it studies the integral equation that characterizes the optimal boundary. In Paper IV we consider a stochastic game between two players and determine the optimal value and exercise strategy using an iterative technique. Paper V employs a similar iterative method to solve the statistical problem of determining the unknown drift of a stochastic process, where not only running time but also each observation of the process is costly.
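A hedged Monte Carlo illustration of the jump-to-default model class from Papers I and II, assuming a constant default intensity λ: geometric Brownian motion killed at an independent exponential default time, with the pre-default drift raised to r + λ so the discounted, default-adjusted price remains a martingale. The thesis derives prices via a PIDE and explicit formulas rather than by simulation.

```python
import numpy as np

def jtd_call_mc(s0=100.0, k=100.0, t=1.0, r=0.02, sigma=0.3, lam=0.05,
                n_paths=200_000, seed=0):
    """Monte Carlo price of a European call in a toy jump-to-default model.

    The asset follows geometric Brownian motion but drops to zero at an
    independent exponential(lam) default time; the drift r + lam keeps the
    discounted, default-adjusted price a martingale. All parameter values
    are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r + lam - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    survived = rng.exponential(1.0 / lam, n_paths) > t   # no default before t
    payoff = np.where(survived, np.maximum(st - k, 0.0), 0.0)
    return np.exp(-r * t) * payoff.mean()

print(jtd_call_mc())
```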
