111

Statistical estimation of the locations of lightning events

Elhor, Aicha 01 April 2000 (has links)
No description available.
112

Toward a scalable Bayesian workflow

Yao, Yuling January 2021 (has links)
A scalable Bayesian workflow needs the combination of fast but reliable computing, efficient but targeted model evaluation, and extensive but directed model building and expansion. In this thesis, I develop a sequence of methods to push the scalability frontier of the workflow. First, I study diagnostics of Bayesian computing. Pareto smoothed importance sampling stabilizes importance weights using a generalized Pareto distribution fit to the upper tail of the distribution of the simulated importance ratios. The method, which empirically performs better than existing approaches for stabilizing importance sampling estimates, includes stabilized effective sample size estimates, Monte Carlo error estimates and convergence diagnostics. For variational inference, I propose two diagnostic algorithms: the Pareto smoothed importance sampling diagnostic gives a goodness-of-fit measurement for joint distributions, while variational simulation-based calibration assesses the average performance of point estimates. I further apply this importance sampling strategy to causal inference and develop diagnostics for covariate imbalance in observational studies. Second, I develop a solution to continuous model expansion using adaptive path sampling and tempering. This development is relevant to both model building and computing in the workflow. For the former, I provide an automated way to connect models via a geometric bridge such that a supermodel encompasses the individual models as special cases. For the latter, I use adaptive path sampling as the preferred strategy for estimating the normalizing constant and marginal density, based on which I propose two metastable sampling schemes. Continuous simulated tempering aims at multimodal posterior sampling, and the implicit divide-and-conquer sampler targets funnel-shaped entropic barriers. Both schemes are highly automated and empirically perform better than existing methods for sampling from metastable distributions. Last, a complete Bayesian workflow distinguishes itself from a one-shot data analysis by its enthusiasm for multiple model fittings and its open-mindedness to model misspecification. I take the idea of stacking from the point-estimation literature and generalize it to the combination of Bayesian predictive distributions. Using an importance-sampling-based leave-one-out approximation, stacking is computationally efficient. I compare stacking, Bayesian model averaging, and several variants in a decision-theory framework. I further apply the stacking strategy to multimodal sampling, in which Markov chain Monte Carlo algorithms can have difficulty moving between modes. The result from stacking is not necessarily equivalent, even asymptotically, to fully Bayesian inference, but it serves many of the same goals. Under misspecified models, stacking can give better predictive performance than full Bayesian inference, so multimodality can be considered a blessing rather than a curse. Furthermore, I show that stacking is most effective when the model predictive performance is heterogeneous in inputs, such that it can be further improved by hierarchical modeling. To this end, I develop hierarchical stacking, in which the model weights are input-varying yet partially pooled, and further generalize this method to incorporate discrete and continuous inputs, other structured priors, and time-series and longitudinal data. Big data need big models, big models need big model evaluation, and big model evaluation itself needs extra data collection and model building.
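For illustration, the Pareto smoothing step described in this abstract can be sketched in a few lines: fit a generalized Pareto distribution to the largest importance ratios and replace them with order statistics of the fitted tail. This is a minimal, hedged sketch only; the tail-fraction rule, function name, and toy data are assumptions of mine, not the thesis implementation (production versions live in packages such as loo and ArviZ).

```python
# Illustrative sketch of Pareto smoothed importance sampling (PSIS).
import numpy as np
from scipy.stats import genpareto

def psis_smooth(log_ratios, tail_frac=0.2):
    """Stabilize importance weights by replacing the largest ratios with
    order statistics of a generalized Pareto distribution fit to the tail."""
    r = np.exp(log_ratios - log_ratios.max())   # normalized importance ratios
    order = np.argsort(r)
    m = max(5, int(tail_frac * r.size))         # size of the upper tail (assumed rule)
    tail_idx = order[-m:]                       # indices of the m largest ratios
    cutoff = r[order[-m - 1]]                   # tail threshold
    # Fit a generalized Pareto to the exceedances over the cutoff.
    c, _, scale = genpareto.fit(r[tail_idx] - cutoff, floc=0.0)
    # Replace tail weights with expected order statistics of the fitted tail.
    probs = (np.arange(1, m + 1) - 0.5) / m
    r_s = r.copy()
    r_s[tail_idx] = cutoff + genpareto.ppf(probs, c, loc=0.0, scale=scale)
    return r_s / r_s.sum(), c                   # smoothed weights, k-hat diagnostic

# Toy example: smoothed weights and the k-hat shape diagnostic.
rng = np.random.default_rng(0)
w, khat = psis_smooth(rng.standard_t(df=3, size=4000))
print(w.sum(), khat)   # weights sum to 1; a large k-hat signals unreliable estimates
```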
113

Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint

林達明, Lin, Daming. January 1995 (has links)
published_or_final_version / Statistics / Doctoral / Doctor of Philosophy
114

Bayesian analysis of errors-in-variables in generalized linear models

鄧沛權, Tang, Pui-kuen. January 1992 (has links)
published_or_final_version / Statistics / Doctoral / Doctor of Philosophy
115

Development of high performance implantable cardioverter defibrillator based on statistical analysis of electrocardiography

Kwan, Siu-ki., 關兆奇. January 2007 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
116

Bayesian carrier frequency offset estimation in orthogonal frequency division multiplexing systems

Cai, Kun, 蔡琨 January 2009 (has links)
published_or_final_version / Electrical and Electronic Engineering / Master / Master of Philosophy
117

Online auction price prediction: a Bayesian updating framework based on the feedback history

Yang, Boye., 扬博野. January 2009 (has links)
published_or_final_version / Business / Master / Master of Philosophy
118

The effectiveness of hedge fund strategies and managers’ skills during market crises: a fuzzy, non-parametric and Bayesian analysis

05 November 2012 (has links)
Ph.D. / This thesis investigates the persistence of hedge fund managers' skills, the optimality of the strategies they use to consistently outperform the market during periods of boom and/or recession, and the market risk encountered thereby. We consider a data set of monthly investment strategy indices published by the Hedge Fund Research group, spanning January 1995 to June 2010. We divide this sample period into four overlapping sub-sample periods that contain different economic market trends. We define a skilled manager as one who can outperform the market consistently during two consecutive sub-sample periods. To investigate the presence of managerial skills among hedge fund managers, we first distinguish between outperformance, selectivity and market timing skills. We then employ three different econometric models: frequentist, Bayesian and fuzzy regression, in order to estimate outperformance, selectivity and market timing skills using both linear and quadratic CAPM. Persistence in performance is assessed in three different ways: a contingency table, a chi-square test and a cross-sectional auto-regression technique. The results obtained with the two probabilistic methods (frequentist and Bayesian) show that fund managers have the skills to outperform the market during the period of positive economic growth (i.e., between sub-sample periods 1 and 3). This market outperformance is due to both selectivity skill (during sub-sample periods 2 and 3) and market timing skill (during sub-sample periods 1 and 2). These results contradict the EMH and suggest that the market is not always efficient: it is possible to earn abnormal rates of return. However, the results obtained with the uncertainty-based fuzzy credibility method show that, despite the presence of a few fund managers who possess selectivity skills during the bull market period (sub-sample periods 2 and 3) and market timing skills during the recovery period (sub-sample periods 3 and 4), there is no evidence of overall market outperformance during the entire sample period. The fuzzy credibility results therefore support the EMH, according to which no economic agent can earn risk-adjusted abnormal rates of return. The difference between the findings of the probabilistic methods (frequentist and Bayesian) and the uncertainty method (fuzzy credibility theory) is primarily due to the way uncertainty is modelled in the hedge fund universe in particular and in financial markets in general. Probability differs fundamentally from uncertainty: probability assumes that the total number of states of the economy is known, whereas uncertainty assumes that it is unknown. Furthermore, probabilistic methods rely on the assumption that asset returns are normally distributed and that transaction costs are negligible.
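As a point of reference, the quadratic CAPM mentioned in this abstract is commonly written in the Treynor-Mazuy form, where the intercept captures selectivity and the coefficient on the squared market excess return captures market timing. The sketch below is a frequentist illustration on simulated monthly excess returns; the variable names, coefficient values, and data are assumptions of mine, not the thesis data or results.

```python
# Quadratic CAPM (Treynor-Mazuy style) regression of fund excess returns
# on market excess returns: alpha ~ selectivity, gamma ~ market timing.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 186                                    # roughly Jan 1995 - Jun 2010, monthly
mkt_excess = rng.normal(0.005, 0.04, n)    # market return minus risk-free rate
fund_excess = (0.002                       # alpha: selectivity skill
               + 0.8 * mkt_excess          # beta: market exposure
               + 1.5 * mkt_excess**2       # gamma > 0: market-timing skill
               + rng.normal(0, 0.02, n))   # idiosyncratic noise

X = sm.add_constant(pd.DataFrame({"mkt": mkt_excess, "mkt_sq": mkt_excess**2}))
fit = sm.OLS(fund_excess, X).fit()
print(fit.params)    # const ~ alpha (selectivity), mkt_sq ~ gamma (timing)
print(fit.pvalues)   # significance of the estimated skills
```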
119

Generalizing the number of states in Bayesian belief propagation, as applied to portfolio management.

Kruger, Jan Walters. January 1996 (has links)
A research report submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in partial fulfillment of the requirements for the degree of Master of Science. / This research report describes the use of Pearl's algorithm in Bayesian belief networks to induce a belief network from a database. With a solid grounding in probability theory, the Pearl algorithm allows belief updating by propagating the likelihoods of leaf nodes (variables) and the prior probabilities. The Pearl algorithm was originally developed for binary variables, and a generalization to more states is investigated. The data used to test this new method, in a portfolio management context, are the returns and various attributes of companies listed on the Johannesburg Stock Exchange (JSE). The results of this model are then compared to those of a linear regression model. The Bayesian method is found to perform better than the linear regression approach. / Andrew Chakane 2018
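The generalization from binary to multi-state variables described in this abstract amounts to carrying k-element messages instead of two-element ones in the lambda-pi update. Below is a minimal, hedged sketch of that core update for a single node; the three-state "return" example and the attribute names are hypothetical, not taken from the research report.

```python
# Core lambda-pi belief update from Pearl's algorithm, for k states.
import numpy as np

def belief(prior, likelihood_messages):
    """BEL(x) is proportional to pi(x) times the product of the lambda
    messages arriving from the node's children; normalize over the k states."""
    bel = np.asarray(prior, dtype=float)
    for lam in likelihood_messages:
        bel = bel * np.asarray(lam, dtype=float)   # elementwise product
    return bel / bel.sum()

# Hypothetical node: a JSE share return discretized into three states (low/mid/high).
prior             = [0.3, 0.5, 0.2]
lam_from_pe_ratio = [0.2, 0.5, 0.9]   # likelihood message from one attribute
lam_from_volume   = [0.4, 0.6, 0.5]   # likelihood message from another attribute
print(belief(prior, [lam_from_pe_ratio, lam_from_volume]))
```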
120

Various Approaches on Parameter Estimation in Mixture and Non-Mixture Cure Models

Unknown Date (has links)
Analyzing lifetime data with long-term survivors is an important topic in medical applications. Cure models are usually used to analyze survival data with a proportion of cured subjects or long-term survivors. In order to include the proportion of cured subjects, mixture and non-mixture cure models are considered. In this dissertation, we utilize both maximum likelihood and Bayesian methods to estimate model parameters. Simulation studies are carried out to verify the finite sample performance of the estimation methods. Real data analyses are reported to illustrate the goodness-of-fit via Fréchet, Weibull and Exponentiated Exponential susceptible distributions. Among the three parametric susceptible distributions, Fréchet is the most promising. Next, we extend the non-mixture cure model to include a change point in a covariate for right censored data. The smoothed likelihood approach is used to address the problem of a log-likelihood function that is not differentiable with respect to the change point. The simulation study is based on the non-mixture change point cure model with an exponential distribution for the susceptible subjects. The simulation results revealed a convincing performance of the proposed method of estimation. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
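For readers unfamiliar with cure models, the mixture formulation referred to above writes the population survival function as S(t) = pi + (1 - pi) * S_u(t), where pi is the cure fraction and S_u is the survival function of the susceptible subjects. The following is a minimal maximum-likelihood sketch with a Weibull susceptible distribution and right censoring; the parameterization, optimizer choice, and simulated data are assumptions of mine, not the dissertation's.

```python
# Maximum likelihood for the mixture cure model S(t) = pi + (1 - pi) * S_u(t).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_lik(theta, t, delta):
    """theta = (logit cure fraction, log shape, log scale); delta = 1 if event observed."""
    pi = 1.0 / (1.0 + np.exp(-theta[0]))           # cure fraction
    shape, scale = np.exp(theta[1]), np.exp(theta[2])
    f = weibull_min.pdf(t, shape, scale=scale)     # density of susceptibles
    S = weibull_min.sf(t, shape, scale=scale)      # survival of susceptibles
    ll = delta * np.log((1 - pi) * f) + (1 - delta) * np.log(pi + (1 - pi) * S)
    return -ll.sum()

# Simulated right-censored data: 30% cured, Weibull(1.5, 2) event times, censoring at t = 5.
rng = np.random.default_rng(2)
n = 500
cured = rng.random(n) < 0.3
event_time = np.where(cured, np.inf,
                      weibull_min.rvs(1.5, scale=2.0, size=n, random_state=rng))
t = np.minimum(event_time, 5.0)
delta = (event_time <= 5.0).astype(float)

fit = minimize(neg_log_lik, x0=np.zeros(3), args=(t, delta), method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print(pi_hat, np.exp(fit.x[1:]))    # estimated cure fraction, Weibull shape and scale
```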
