11

Exploring the optimal Transformation for Volatility

Volfson, Alexander 29 April 2010 (has links)
This paper explores the fit of a stochastic volatility model, in which the Box-Cox transformation of the squared volatility follows an autoregressive Gaussian distribution, to the continuously compounded daily returns of the Australian stock index. Estimation was difficult, and over-fitting likely, because the model contains more parameters than data points. We developed a revised model that held a couple of these parameters fixed and then, further, a model that reduced the number of parameters significantly by grouping trading days. A Metropolis-Hastings algorithm was used to simulate the joint density and derive estimated volatilities. Although autocorrelations were higher with a smaller Box-Cox transformation parameter, the distributional fit was much better.
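As a rough illustration of the estimation approach described above, the sketch below runs a random-walk Metropolis-Hastings sampler over the Box-Cox-transformed volatility path while holding the autoregressive parameters fixed; the function names, parameter values, and model details are hypothetical placeholders, not the thesis code.

```python
import numpy as np

def inv_boxcox(h, lam):
    """Invert the Box-Cox transform to recover the squared volatility."""
    return np.exp(h) if lam == 0 else (lam * h + 1.0) ** (1.0 / lam)

def log_target(h, r, lam, mu, phi, tau):
    """Log of (return likelihood) x (AR(1) Gaussian density of the transformed volatility)."""
    sig2 = inv_boxcox(h, lam)
    log_lik = -0.5 * np.sum(np.log(sig2) + r ** 2 / sig2)      # r_t ~ N(0, sigma_t^2)
    resid = h[1:] - mu - phi * (h[:-1] - mu)
    log_prior = -0.5 * np.sum(resid ** 2) / tau ** 2            # AR(1) Gaussian on h_t
    return log_lik + log_prior

def mh_volatility(r, lam=0.5, mu=0.0, phi=0.95, tau=0.2, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis-Hastings over the latent transformed-volatility path."""
    r = np.asarray(r, float)
    rng = np.random.default_rng(seed)
    h = np.full(len(r), mu)                                      # start at the stationary mean
    cur = log_target(h, r, lam, mu, phi, tau)
    for _ in range(n_iter):
        prop = h + step * rng.standard_normal(len(r))            # joint random-walk proposal
        if lam != 0 and np.min(lam * prop + 1.0) <= 0:           # stay inside the Box-Cox domain
            continue
        new = log_target(prop, r, lam, mu, phi, tau)
        if np.log(rng.uniform()) < new - cur:                    # Metropolis acceptance step
            h, cur = prop, new
    return inv_boxcox(h, lam)                                    # estimated squared volatilities
```

A smaller `lam` compresses large volatilities more strongly (approaching the log transform as it tends to zero), which is the trade-off between autocorrelation and distributional fit that the abstract refers to.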
12

Bayesian and Empirical Bayes Approaches to Power Law Process and Microarray Analysis

Chen, Zhao 12 July 2004 (has links)
In this dissertation, we apply Bayesian and Empirical Bayes methods to reliability growth models based on the power law process. We also apply Bayes methods to the study of microarrays, in particular to the selection of differentially expressed genes. The power law process has been used extensively in reliability growth models. Chapter 1 reviews some basic concepts in reliability growth models. Chapter 2 presents classical inferences on the power law process; we also assess the goodness of fit of a power law process for a reliability growth model. In Chapter 3 we develop Bayesian procedures for the power law process with failure-truncated data, using non-informative priors for the scale and location parameters. In addition to obtaining the posterior density of the parameters of the power law process, prediction inferences for the expected number of failures in some time interval and the probability of future failure times are also discussed. The prediction results for the software reliability model are illustrated, and we compare our results with those of Bar-Lev, S. K. et al. Posterior densities of several parametric functions are also given. Chapter 4 provides Empirical Bayes procedures for the power law process with natural conjugate priors and nonparametric priors; for the natural conjugate priors, a two-hyperparameter prior and a more generalized three-hyperparameter prior are used. In Chapter 5, we review some basic statistical procedures involved in microarray analysis and present and compare several transformation and normalization methods for probe-level data. The objective of Chapter 6 is to select differentially expressed genes from tens of thousands of genes. Both classical methods (fold change, t-test, Wilcoxon rank-sum test, SAM, and local Z-score) and Empirical Bayes methods (EBarrays and LIMMA) are applied to obtain the results. Outputs of a typical classical method and a typical Empirical Bayes method are discussed in detail.
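For context, the power law process referenced in Chapters 1 through 4 is the nonhomogeneous Poisson process whose intensity and mean value functions, in one standard parameterization (the dissertation's own notation may differ), are

```latex
\lambda(t) = \frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta - 1},
\qquad
\mathbb{E}[N(t)] = \int_0^t \lambda(s)\,ds = \left(\frac{t}{\theta}\right)^{\beta},
\qquad \beta > 0,\ \theta > 0,
```

so that β < 1 corresponds to reliability growth (failures arriving more slowly over time) and β > 1 to deterioration.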
13

Validity Generalization and Transportability: An Investigation of Distributional Assumptions of Random-Effects Meta-Analytic Methods

Kisamore, Jennifer L 09 June 2003 (has links)
Validity generalization work over the past 25 years has called into question the veracity of the assumption that validity is situationally specific. Recent theoretical and methodological work has suggested that validity coefficients may be transportable even if true validity is not a constant. Most transportability work is based on the assumption that the distribution of rho (ρ) is normal, yet no empirical evidence exists to support this assumption. The present study used a competing-model approach in which a new procedure for assessing transportability was compared with two more commonly used methods. Empirical Bayes estimation (Brannick, 2001; Brannick & Hall, 2003) was evaluated alongside both the Schmidt-Hunter multiplicative model (Hunter & Schmidt, 1990) and a corrected Hedges-Vevea model (see Hall & Brannick, 2002; Hedges & Vevea, 1998). The purpose of the present study was two-fold. The first part compared the accuracy of estimates of the mean, standard deviation, and the lower bound of 90 and 99 percent credibility intervals computed with the three methods across 32 simulated conditions; the mean, variance, and shape of the distribution varied across these conditions. The second part compared results of the three methods applied to previously published validity coefficients, to show whether, in practice, the choice of method matters for the decision that transportability is warranted. Results of the simulation analyses suggest that the Schmidt-Hunter method is superior to the other methods even when the distribution of true validity parameters violates the assumption of normality. Results of analyses conducted on real data show trends consistent with those evident in the analyses of the simulated data. Conclusions regarding transportability, however, did not change as a function of the method used for any of the real data sets. Limitations of the present study as well as recommendations for practice and future research are provided.
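For readers unfamiliar with the computations being compared, a bare-bones version of the Schmidt-Hunter approach (sample-size-weighted mean, sampling-error-corrected variance, and the lower bound of a credibility interval) is sketched below with hypothetical data; it omits artifact corrections and is not the study's code.

```python
import numpy as np

def bare_bones_credibility(r, n, level=0.90):
    """Bare-bones Schmidt-Hunter estimates: weighted mean validity, residual (true)
    SD after removing sampling error, and the lower bound of a two-sided
    credibility interval at the requested level."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    r_bar = np.sum(n * r) / np.sum(n)                        # weighted mean observed correlation
    var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)       # weighted observed variance
    var_err = (1.0 - r_bar ** 2) ** 2 / (np.mean(n) - 1.0)   # expected sampling-error variance
    sd_rho = np.sqrt(max(var_obs - var_err, 0.0))            # estimated SD of true validities
    z = {0.90: 1.645, 0.99: 2.576}[level]                    # normal quantile for the interval
    return r_bar, sd_rho, r_bar - z * sd_rho

# Hypothetical validity coefficients and sample sizes from five studies:
print(bare_bones_credibility([0.20, 0.35, 0.28, 0.15, 0.40], [80, 120, 60, 150, 90]))
```

Whether the normal quantile is appropriate here is exactly the distributional assumption the study interrogates: if the true validities are not normally distributed, the credibility-interval lower bound can be misleading.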
14

DEFT guessing: using inductive transfer to improve rule evaluation from limited data

Reid, Mark Darren, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Algorithms that learn sets of rules describing a concept from its examples have been widely studied in machine learning and have been applied to problems in medicine, molecular biology, planning and linguistics. Many of these algorithms use a separate-and-conquer strategy, repeatedly searching for rules that explain different parts of the example set. When examples are scarce, however, it is difficult for these algorithms to evaluate the relative quality of two or more rules which fit the examples equally well. This dissertation proposes, implements and examines a general technique for modifying rule evaluation in order to improve learning performance in these situations. This approach, called Description-based Evaluation Function Transfer (DEFT), adjusts the way rules are evaluated on a target concept by taking into account the performance of similar rules on a related support task that is supplied by a domain expert. Central to this approach is a novel theory of task similarity defined in terms of syntactic properties of rules, called descriptions, which determine what it means for rules to be similar. Each description is associated with a prior distribution over classification probabilities derived from the support examples, and a rule's evaluation on a target task is combined with the relevant prior using Bayes' rule. Given some natural conditions regarding the similarity of the target and support tasks, it is shown that modifying rule evaluation in this way is guaranteed to improve estimates of the true classification probabilities. Algorithms to efficiently implement DEFT are described, analysed and used to measure the effect these improvements have on the quality of induced theories. Empirical studies of this implementation were carried out on two artificial and two real-world domains. The results show that the inductive transfer of evaluation bias based on rule similarity is an effective and practical way to improve learning when training examples are limited.
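The core idea of adjusting a rule's evaluation with a prior built from similar rules can be illustrated with a simple Beta-Binomial update; the counts and the prior strength `m` below are hypothetical, and the sketch illustrates the general mechanism rather than the dissertation's implementation.

```python
def shrunken_rule_accuracy(pos, neg, support_pos, support_neg, m=10.0):
    """Shrink a rule's observed accuracy on the target task toward the accuracy of
    syntactically similar rules on the support task, via a Beta prior of strength m."""
    prior_acc = support_pos / (support_pos + support_neg)   # accuracy of similar support rules
    alpha, beta = m * prior_acc, m * (1.0 - prior_acc)      # Beta prior pseudo-counts
    return (pos + alpha) / (pos + neg + alpha + beta)       # posterior mean accuracy

# A rule covering 3 positives and 1 negative on the target task, whose similar
# support-task rules covered 40 positives and 60 negatives in total:
print(shrunken_rule_accuracy(3, 1, 40, 60))   # 0.5: pulled from 0.75 toward 0.40
```

With few target examples the prior dominates, which is exactly the regime in which rules that fit the scarce data equally well need to be distinguished.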
15

Examining the Effects of Site-Selection Criteria for Evaluating the Effectiveness of Traffic Safety Improvement Countermeasures

Kuo, Pei-Fen May 2012 (has links)
The before-after study is still the most popular method used by traffic engineers and transportation safety analysts for evaluating the effects of an intervention. However, this kind of study may be plagued by important methodological limitations that can significantly alter the study outcome, including regression-to-the-mean (RTM) and site-selection effects. So far, most of the research on these biases has focused on RTM. Hence, the primary objective of this study is to present a method that can reduce site-selection bias when an entry criterion is used in before-after studies for continuous data (e.g., speeds, reaction times) and count data (e.g., number of crashes, number of fatalities). The proposed method documented in this research provides a way to adjust the Naive estimator using the sample data alone, without relying on data collected from a control group, since finding enough appropriate sites for a control group is much harder in traffic-safety analyses. In this study, the proposed method, referred to as the Adjusted method, was compared with methods commonly used in before-after studies. The results showed that, among all methods evaluated, the Naive method is the most significantly affected by selection bias. Using the control-group (CG) method, the ANCOVA method, or the empirical Bayes method based on a control group (EBCG) can eliminate site-selection bias, as long as the characteristics of the control group are exactly the same as those of the treatment group. However, control-group data with the same characteristics based on a truncated distribution or sample may not be available in practice. Moreover, the site-selection bias generated by using a dissimilar control group might be even higher than with the Naive method. The Adjusted method can partially eliminate site-selection bias even when biased estimators of the mean, variance, and correlation coefficient of a truncated normal distribution are used or are not known with certainty. In addition, three actual datasets were used to evaluate the accuracy of the Adjusted method in estimating site-selection biases for various types of data with different mean and sample-size values.
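To make the selection problem concrete, the sketch below computes how much a before-period mean is inflated when only sites whose observed value exceeds a cutoff are selected, under a simple normal model; it illustrates the bias that the Adjusted method targets rather than the method itself, and the numbers are hypothetical.

```python
from scipy.stats import norm

def truncation_bias(mu, sigma, cutoff):
    """E[X | X > cutoff] - mu for X ~ N(mu, sigma^2): the inflation of the
    before-period mean caused by selecting sites above an entry criterion."""
    a = (cutoff - mu) / sigma
    return sigma * norm.pdf(a) / norm.sf(a)

# Sites averaging 10 crashes/year (sd 3), selected only if they recorded more than 14:
print(truncation_bias(10.0, 3.0, 14.0))   # about 5.4 extra crashes attributable to selection
```

The Naive before-after estimator counts this inflation as part of the treatment effect, which is why it is the method most affected by selection bias in the comparison above.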
16

Efficient Tools For Reliability Analysis Using Finite Mixture Distributions

Cross, Richard J. (Richard John) 02 December 2004 (has links)
The complexity of many failure mechanisms and variations in component manufacture often make standard probability distributions inadequate for reliability modeling. Finite mixture distributions provide the necessary flexibility for modeling such complex phenomena but add considerable difficulty to the inference. This difficulty is overcome by drawing an analogy to neural networks. With appropriate modifications, a neural network can represent a finite mixture CDF or PDF exactly. Training with Bayesian regularization gives an efficient empirical Bayesian inference of the failure-time distribution. Training also yields an effective number of parameters from which the number of components in the mixture can be estimated. Credible sets for functions of the model parameters can be estimated using a simple closed-form expression. Complete, censored, and inspection samples can be handled by appropriate choice of the likelihood function. In this work, architectures for Exponential, Weibull, Normal, and Log-Normal mixture networks have been derived. The capabilities of mixture networks have been demonstrated for complete, censored, and inspection samples from Weibull and Log-Normal mixtures. Furthermore, the ability of mixture networks to model arbitrary failure distributions has been demonstrated. A sensitivity analysis has been performed to determine how mixture network estimator errors are affected by mixture component spacing and sample size. It is shown that mixture network estimators are asymptotically unbiased and that their errors decay with sample size at least as well as those of the MLE.
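As a small illustration of the model class, a finite mixture CDF is a weighted combination of component CDFs; a two-component Weibull mixture with hypothetical parameters is sketched below (this is the target quantity, not the neural-network representation developed in the thesis).

```python
import numpy as np

def weibull_mixture_cdf(t, weights, shapes, scales):
    """CDF of a finite Weibull mixture: sum_k w_k * (1 - exp(-(t / eta_k)**beta_k))."""
    t = np.asarray(t, float)
    return sum(w * (1.0 - np.exp(-(t / eta) ** beta))
               for w, beta, eta in zip(weights, shapes, scales))

# Early failures (30% of units, shape 0.8, scale 200 h) mixed with wear-out
# failures (70% of units, shape 3.0, scale 1500 h):
print(weibull_mixture_cdf([100.0, 1000.0],
                          weights=[0.3, 0.7], shapes=[0.8, 3.0], scales=[200.0, 1500.0]))
```

A network whose output is this convex combination of component CDFs, with mixing weights constrained to sum to one, represents the mixture exactly, which is the analogy the abstract draws.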
17

Domain knowledge, uncertainty, and parameter constraints

Mao, Yi 24 August 2010 (has links)
No description available.
18

Validity generalization and transportability [electronic resource] : an investigation of random-effects meta-analytic methods / by Jennifer L. Kisamore.

Kisamore, Jennifer L. January 2003 (has links)
Includes vita. / Title from PDF of title page. / Document formatted into pages; contains 134 pages. / Thesis (Ph.D.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: Validity generalization work over the past 25 years has called into question the veracity of the assumption that validity is situationally specific. Recent theoretical and methodological work has suggested that validity coefficients may be transportable even if true validity is not a constant. Most transportability work is based on the assumption that the distribution of rho (ρ) is normal, yet no empirical evidence exists to support this assumption. The present study used a competing-model approach in which a new procedure for assessing transportability was compared with two more commonly used methods. Empirical Bayes estimation (Brannick, 2001; Brannick & Hall, 2003) was evaluated alongside both the Schmidt-Hunter multiplicative model (Hunter & Schmidt, 1990) and a corrected Hedges-Vevea model (see Hall & Brannick, 2002; Hedges & Vevea, 1998). The purpose of the present study was two-fold. The first part of the study compared the accuracy of estimates of the mean, standard deviation, and the lower bound of 90 and 99 percent credibility intervals computed with the three methods across 32 simulated conditions. The mean, variance, and shape of the distribution varied across the simulated conditions. The second part of the study involved comparing results of analyses of the three methods based on previously published validity coefficients, to show whether, in practice, the choice of method matters for the decision that transportability is warranted. Results of the simulation analyses suggest that the Schmidt-Hunter method is superior to the other methods even when the distribution of true validity parameters violates the assumption of normality. Results of analyses conducted on real data show trends consistent with those evident in the analyses of the simulated data. Conclusions regarding transportability, however, did not change as a function of the method used for any of the real data sets. Limitations of the present study as well as recommendations for practice and future research are provided. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
19

Duomenų tyrybos empirinių Bajeso metodų tyrimas ir taikymas / Analysis and application of empirical Bayes methods in data mining

Jakimauskas, Gintautas 23 April 2014 (has links)
The research object is data mining empirical Bayes methods and algorithms applied in the analysis of large populations of large dimensions. The aim of the research is to create methods and algorithms for testing nonparametric hypotheses for large populations and for estimating the parameters of data models. The following problems are solved to reach this aim: 1. To create an efficient data partitioning algorithm for large dimensional data. 2. To apply the data partitioning algorithm of large dimensional data in testing nonparametric hypotheses. 3. To apply the empirical Bayes method in testing the independence of components of large dimensional data vectors under different mathematical models, selecting the optimal model and the corresponding empirical Bayes estimator. 4. To develop an algorithm for estimating probabilities of rare events in large populations, using the empirical Bayes method and comparing the Poisson-gamma and Poisson-Gaussian mathematical models, by selecting an optimal model and a respective empirical Bayes estimator. 5. To create an algorithm for logistic regression of rare events using the empirical Bayes method. The results obtained enable us to perform very fast and efficient partitioning of large dimensional data, to test the independence of selected components of large dimensional uncorrelated data, and to select the optimal model and the corresponding empirical Bayes estimator when estimating probabilities of rare events in large populations using the Poisson-gamma and Poisson-Gaussian mathematical models. The nonsingularity condition for the Poisson-gamma model is presented.
20

Analysis and application of empirical Bayes methods in data mining / Duomenų tyrybos empirinių Bajeso metodų tyrimas ir taikymas

Jakimauskas, Gintautas 23 April 2014 (has links)
The research object is data mining empirical Bayes methods and algorithms applied in the analysis of large populations of large dimensions. The aim of the research is to create methods and algorithms for testing nonparametric hypotheses for large populations and for estimating the parameters of data models. The following problems are solved to reach this aim: 1. To create an efficient data partitioning algorithm for large dimensional data. 2. To apply the data partitioning algorithm of large dimensional data in testing nonparametric hypotheses. 3. To apply the empirical Bayes method in testing the independence of components of large dimensional data vectors under different mathematical models, selecting the optimal model and the corresponding empirical Bayes estimator. 4. To develop an algorithm for estimating probabilities of rare events in large populations, using the empirical Bayes method and comparing the Poisson-gamma and Poisson-Gaussian mathematical models, by selecting an optimal model and a respective empirical Bayes estimator. 5. To create an algorithm for logistic regression of rare events using the empirical Bayes method. The results obtained enable us to perform very fast and efficient partitioning of large dimensional data, to test the independence of selected components of large dimensional uncorrelated data, and to select the optimal model and the corresponding empirical Bayes estimator when estimating probabilities of rare events in large populations using the Poisson-gamma and Poisson-Gaussian mathematical models. The nonsingularity condition for the Poisson-gamma model is presented.
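To illustrate the Poisson-gamma model mentioned in problem 4, the sketch below computes empirical Bayes estimates of rare-event rates, with the gamma hyperparameters set by a simple method-of-moments fit; the variable names and moment equations are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np

def poisson_gamma_eb(counts, exposure):
    """Empirical Bayes shrinkage of rare-event rates under a Poisson-gamma model:
    counts[i] ~ Poisson(rho_i * exposure[i]), rho_i ~ Gamma(alpha, rate=beta)."""
    counts, exposure = np.asarray(counts, float), np.asarray(exposure, float)
    rates = counts / exposure
    m, v = rates.mean(), rates.var()
    v_rho = max(v - m / exposure.mean(), 1e-12)     # prior variance by method of moments
    alpha, beta = m ** 2 / v_rho, m / v_rho         # gamma hyperparameters matching (m, v_rho)
    return (counts + alpha) / (exposure + beta)     # posterior mean rate for each unit

# Crash counts at eight sites with differing exposure (e.g., million vehicle-kilometres):
print(poisson_gamma_eb([0, 3, 1, 0, 7, 2, 0, 1], [1.2, 2.5, 0.8, 1.0, 3.1, 1.7, 0.6, 1.4]))
```

Units with little exposure are pulled strongly toward the overall mean rate, while well-observed units keep estimates close to their raw rates, which is the shrinkage behaviour that makes empirical Bayes attractive for rare events in large populations.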
