341

A comparison of a supplementary sample non-parametric empirical Bayes estimator with the classical estimator in a quality control situation

Gabbert, James Tate January 1968 (has links)
The purpose of this study was to compare the effectiveness of the classical estimator with that of a supplementary sample non-parametric empirical Bayes estimator in detecting an out-of-control situation arising in statistical quality control work. The investigation was accomplished through Monte Carlo simulation on the IBM-7040/1401 system at the Virginia Polytechnic Institute Computing Center, Blacksburg, Virginia. In most cases considered in this study, the sole criterion for accepting or rejecting the hypothesis that the industrial process is in control was the location of the estimate on the control chart for fraction defectives. If an estimate fell outside the 3σ control limits, that particular batch was said to have been produced by an out-of-control system. In other cases the concept of "runs" was included as an additional criterion for acceptance or rejection. Also considered were various parameters, such as the mean in-control fraction defectives, the mean out-of-control fraction defectives, the first sample size, the standard deviation of the supplementary sample estimates, and the number of past experiences used in computing the empirical Bayes estimator. The Monte Carlo studies showed that, for almost any set of parameter values, the empirical Bayes estimator is much more effective in detecting an out-of-control situation than is the classical estimator. The most notable advantage gained by using the empirical Bayes estimator is that long-range lack of detection is virtually impossible. / M.S.
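As a rough illustration of the kind of simulation the abstract describes, the sketch below generates batches from an in-control and then an out-of-control process and checks each batch's estimated fraction defective against 3σ control limits. All parameter values are invented, and the empirical Bayes step is a simple beta-binomial stand-in fitted to past batches, not Gabbert's supplementary-sample non-parametric estimator, so its output should not be read as reproducing the thesis's findings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustration parameters (not taken from the thesis)
p_in, p_out = 0.05, 0.15      # in-control / out-of-control fraction defective
n = 100                       # batch size
n_past = 30                   # number of past batches ("past experiences")
n_batches = 400               # batches simulated; the shift happens half-way

# 3-sigma control limits for the fraction-defective (p) chart
sigma = np.sqrt(p_in * (1 - p_in) / n)
ucl = p_in + 3 * sigma
lcl = max(p_in - 3 * sigma, 0.0)

def classical_estimate(d):
    """Classical estimator: observed fraction defective of the current batch."""
    return d / n

def eb_estimate(d, past_d):
    """Simple parametric empirical Bayes stand-in: fit a beta prior to the past
    batch fractions by the method of moments, then return the beta-binomial
    posterior mean for the current batch.  This is NOT Gabbert's supplementary-
    sample non-parametric estimator, only a placeholder showing where an
    empirical Bayes estimate enters the scaffold."""
    fractions = np.asarray(past_d) / n
    m, v = fractions.mean(), fractions.var()
    if v <= 0 or v >= m * (1 - m):
        return d / n
    strength = m * (1 - m) / v - 1
    a_hat, b_hat = m * strength, (1 - m) * strength
    return (a_hat + d) / (a_hat + b_hat + n)

signals = {"classical": 0, "empirical_bayes": 0}
past = list(rng.binomial(n, p_in, size=n_past))
for t in range(n_batches):
    p_true = p_in if t < n_batches // 2 else p_out
    d = rng.binomial(n, p_true)
    for name, est in (("classical", classical_estimate(d)),
                      ("empirical_bayes", eb_estimate(d, past))):
        if not (lcl <= est <= ucl):
            signals[name] += 1
    past = past[1:] + [d]     # rolling window of past experiences

print(signals)
```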
342

Comparison of Bayes' and minimum variance unbiased estimators of reliability in the extreme value life testing model

Godbold, James Homer January 1970 (has links)
The purpose of this study is to consider two different types of estimators for reliability using the extreme value distribution as the life-testing model. First the unbiased minimum variance estimator is derived. Then the Bayes' estimators for the uniform, exponential, and inverted gamma prior distributions are obtained, and these results are extended to a whole class of exponential failure models. Each of the Bayes' estimators is compared with the unbiased minimum variance estimator in a Monte Carlo simulation where it is shown that the Bayes' estimator has smaller squared error loss in each case. The problem of obtaining estimators with respect to an exponential type loss function is also considered. The difficulties in such an approach are demonstrated. / Master of Science
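The comparison the abstract describes can be sketched in the closely related exponential life-testing case (one member of the exponential failure class to which the thesis extends its results): the minimum variance unbiased estimator of the reliability R(t₀) = exp(−t₀/θ) is (1 − t₀/S)^(n−1) for t₀ < S, and the Bayes estimator under an inverted gamma prior and squared error loss is the posterior mean ((b+S)/(b+S+t₀))^(a+n). The prior parameters, mission time, and sample size below are assumptions for illustration, and the sketch compares mean squared error at a single fixed θ, which is not the same as the Bayes risk comparison reported in the abstract; the thesis's own derivations for the extreme value model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

theta_true = 2.0        # assumed true mean lifetime
t0 = 1.0                # mission time at which R(t0) = exp(-t0 / theta) is estimated
n = 15                  # sample size
a, b = 3.0, 4.0         # assumed inverted gamma prior parameters for theta

def mvue_reliability(x, t0):
    """Minimum variance unbiased estimator of exp(-t0/theta) for exponential lifetimes."""
    s = x.sum()
    return (1.0 - t0 / s) ** (len(x) - 1) if s > t0 else 0.0

def bayes_reliability(x, t0, a, b):
    """Posterior mean of exp(-t0/theta) under an inverted gamma (a, b) prior on theta."""
    s = x.sum()
    return ((b + s) / (b + s + t0)) ** (a + len(x))

true_R = np.exp(-t0 / theta_true)
sq_err = {"mvue": 0.0, "bayes": 0.0}
reps = 5000
for _ in range(reps):
    x = rng.exponential(theta_true, size=n)
    sq_err["mvue"] += (mvue_reliability(x, t0) - true_R) ** 2
    sq_err["bayes"] += (bayes_reliability(x, t0, a, b) - true_R) ** 2

print({k: v / reps for k, v in sq_err.items()})
```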
343

Some parametric empirical Bayes techniques

Rutherford, John Ross January 1965 (has links)
This thesis considers two distinct aspects of the empirical Bayes decision problem. The first aspect considered is the problem of point estimation and hypothesis testing. The second aspect considered is that of estimating the prior distribution and then the estimation of the posterior distribution and confidence intervals. In the first aspect we assume that there exists an unobservable parameter space 𝔏={λ} on which is defined a prior distribution G(λ). For any action a from a class A there is a loss, L(a,λ) ≥ 0, which we incur when we take action a and the true parameter is λ. There exists an observable random vector X̰=(X₁,...,X<sub>k</sub>), k ≥ 1, which admits of a sufficient statistic T=T(X̰) for λ. The conditional density function (c.d.f.) of T is f(t|λ). We assume that there exists a decision function δ<sub>G</sub>(t) from a class D (δ∈D implies that δ(t)∈A for all t) such that the expected loss, R(δ,G) = ∫∫L(δ(t),λ) f(t|λ)dtdG(λ), is minimized. This minimizing decision function is called a Bayes decision function and the associated minimum expected loss is called the Bayes risk R(G). We assume that there exists a sequence of independent identically distributed random vectors <(X₁,...,X<sub>k</sub>,λ)<sub>n</sub>> (or <(T,λ)<sub>n</sub>>), with each element distributed independently of and identically to (X₁,...,X<sub>k</sub>,λ) (or (T,λ)). The problem is to construct sequential decision functions, δ<sub>n</sub>(t;t₁,t₂,...,t<sub>n</sub>)=δ<sub>n</sub>(t), which are asymptotically optimal (a.o.), that is, which satisfy lim<sub>n→∞</sub> R(δ<sub>n</sub>(T),G) = R(G). We extend a theorem of Robbins (Ann. Math. Statist. 35, 1-20) to provide a simple method for the construction of a.o. point estimators of λ with a squared-error loss function when f(t|λ) is continuous. We extend the results of Samuel (Ann. Math. Statist. 34, 1370-1385) to provide a.o. tests of certain parametric hypotheses with loss functions which are polynomials in λ. The c.d.f.'s which are considered are all continuous and include some of those of the exponential class and some whose range depends upon the parameter. This latter result is applied to the problem of the one-sided truncation of density functions. The usefulness of all of these results is predicated upon the fact that the forms of the Bayes decision functions can be determined in such a way that the construction of the analogous a.o. empirical Bayes decision functions is simple. Two manipulative techniques, which provide the desired forms of the Bayes decision function, are introduced. These techniques are applied to several examples, and a.o. decision functions are defined. To estimate the prior distribution we assume that there exists a sequence of independent identically distributed random vectors <(T,λ)<sub>n</sub>>, each distributed according to the joint density function J(t,λ)=G(λ)F(t|λ). The sequence <λ<sub>n</sub>> of <(T,λ)<sub>n</sub>> is unobservable. G(λ) belongs to a subclass g of a class G<sub>p</sub>(g) and F(t|λ) belongs to a class F. G<sub>p</sub>(g) is determined by the conditions: (a) G(λ) is absolutely continuous with respect to Lebesgue measure; (b) its density function, g(λ), is determined completely by a continuous function of its first p moments, p ≥ 2; (c) the first p moments are finite; (d) the subclass g contains those density functions which are determined by a particular known continuous function.
The class F is determined by the condition that there exist known functions h<sub>k</sub>(.), k=1,...,p, such that E[h<sub>k</sub>(T)|λ] = λᵏ. Under these conditions we construct an estimate, G<sub>n</sub>(λ), of G(λ) such that lim<sub>n→∞</sub> E[(G<sub>n</sub>(λ) - G(λ))²] = 0, a.e. λ. Estimates of the posterior distribution and of confidence intervals are constructed using G<sub>n</sub>(λ). / Ph. D.
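A compact way to see the empirical Bayes construction this abstract builds on (Robbins' approach) is the standard discrete Poisson example: under squared-error loss the Bayes estimate of λ given t = k is (k+1)·f_G(k+1)/f_G(k), and the empirical Bayes version replaces the unknown marginal f_G with empirical frequencies computed from past observations. This is only the familiar discrete illustration, not the thesis's continuous-case extension; the gamma prior below is an assumption used solely to generate data.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# Empirical Bayes setup: lambda_i is drawn from an unknown prior G, then
# t_i | lambda_i ~ Poisson(lambda_i).  The estimator never uses G directly.
n = 5000
lam = rng.gamma(shape=3.0, scale=1.0, size=n)   # assumed prior, unknown to the method
t = rng.poisson(lam)

counts = Counter(t.tolist())

def marginal(k):
    """Empirical estimate of the marginal frequency f_n(k)."""
    return counts.get(k, 0) / n

def robbins_estimate(k):
    """Robbins-style empirical Bayes point estimate of lambda given t = k under
    squared-error loss: (k + 1) * f_n(k + 1) / f_n(k).  Evaluated only at
    observed k, so f_n(k) > 0."""
    return (k + 1) * marginal(k + 1) / marginal(k)

# Compare average squared-error loss against the classical estimate lambda_hat = t.
eb_loss = np.mean([(robbins_estimate(k) - l) ** 2 for k, l in zip(t.tolist(), lam)])
ml_loss = np.mean((t - lam) ** 2)
print(f"empirical Bayes: {eb_loss:.3f}   classical: {ml_loss:.3f}")
```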
344

Empirical Bayes estimators for the cross-product ratio of 2x2 contingency tables

Lee, Luen-Fure January 1981 (has links)
In a routinely occurring estimation problem, the experimenter can often regard the parameters of interest themselves as random variables with an unknown prior distribution. Without knowledge of the exact prior distribution the Bayes estimator cannot be obtained. However, as long as independent repetitions of the experiment occur, the empirical Bayes approach can be applied. A general strategy underlying the empirical Bayes estimator consists of finding the Bayes estimator in a form which can be estimated sequentially from the past data; such use of the data circumvents knowledge of the prior distribution. Three different types of sampling distributions of the cell counts of 2x2 contingency tables were considered. In the Independent Poisson Case, an empirical Bayes estimator for the cross-product ratio is presented. If the squared error loss function is used, this empirical Bayes estimator, ᾶ, has asymptotic risk only ε > 0 larger than the true Bayes risk. For the Product Binomial and Multinomial situations, several empirical Bayes estimators for α are proposed. Basically, these 'empirical' Bayes estimators by-pass the prior distribution by estimating the marginal probabilities P(X₁₁,X₂₁,X₁₂,X₂₂) and P(X₁₁+1,X₂₁-1,X₁₂-1,X₂₂+1), where (X₁₁,X₂₁,X₁₂,X₂₂) is the set of current cell counts. Furthermore, because of the assumption of varying sample size(s), they will have asymptotic risk only ε > 0 away from the true Bayes risk if both the number of past experiences and the sample size(s) are sufficiently large. Results of Monte Carlo simulation of the empirical Bayes estimators are presented for carefully selected prior distributions. Mean squared errors for these estimators and the classical estimators were compared. The improvement of empirical Bayes over classical estimators was found to depend on the prior means, the prior variances, the prior distribution of the parameters treated as random variables, and the sample size(s). These conclusions are summarized, and tables are provided. The empirical Bayes estimators of α start to show significant improvement over classical estimators with as few as ten past experiences. In many instances, the improvement is on the order of 15% with only ten past experiences and sample size(s) larger than twenty. However, in cases where the prior variances are very large, the empirical Bayes estimator performs neither better nor worse than the classical one. Greater improvement is shown as more past experiences are used, up to around thirty, after which the improvement appears to stabilize. Finally, other existing estimators for α which also take past experiences into account are discussed and compared to the corresponding empirical Bayes estimator; these were proposed by Birch, Goodman, Mantel and Haenszel, and Woolf, among others. The simulation comparisons indicate that the empirical Bayes estimators outperform them even with small prior variance. A test for deciding when empirical Bayes estimators of α should be used is also suggested and discussed. / Ph. D.
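For the multinomial case, one way to write the estimator the abstract describes is E[α | x] = ((x₁₁+1)(x₂₂+1)/(x₁₂x₂₁)) · P(x₁₁+1, x₂₁−1, x₁₂−1, x₂₂+1)/P(x₁₁, x₂₁, x₁₂, x₂₂), with the two marginal table probabilities replaced by their empirical frequencies over past tables. The sketch below follows that form; the Dirichlet prior, table size, and number of past experiences are assumptions made only so the example runs, and the thesis's exact estimators and varying-sample-size treatment are not reproduced.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

# Repeated 2x2 tables under multinomial sampling, cell order (X11, X21, X12, X22)
# as in the abstract.  The Dirichlet prior and fixed total count are assumptions.
n, n_past = 10, 20000

def draw_table():
    p = rng.dirichlet([8.0, 4.0, 4.0, 8.0])      # prior, unknown to the method
    return tuple(int(c) for c in rng.multinomial(n, p)), p

past = [draw_table()[0] for _ in range(n_past)]
freq = Counter(past)

def classical_alpha(x):
    x11, x21, x12, x22 = x
    return (x11 * x22) / (x12 * x21) if x12 * x21 > 0 else float("nan")

def eb_alpha(x):
    """Empirical Bayes estimate of the cross-product ratio:
    ((x11+1)(x22+1)/(x12*x21)) * P(x11+1, x21-1, x12-1, x22+1) / P(x),
    with both marginal probabilities replaced by empirical frequencies
    of the past tables (one way of writing the estimator described above)."""
    x11, x21, x12, x22 = x
    if x12 == 0 or x21 == 0:
        return float("nan")
    p_x = freq[x] / n_past
    p_shift = freq[(x11 + 1, x21 - 1, x12 - 1, x22 + 1)] / n_past
    if p_x == 0:
        return classical_alpha(x)                # unseen table: fall back
    return (x11 + 1) * (x22 + 1) / (x12 * x21) * p_shift / p_x

current, p = draw_table()
while current[1] == 0 or current[2] == 0:        # need positive off-diagonal counts
    current, p = draw_table()

true_alpha = (p[0] * p[3]) / (p[1] * p[2])
print("true:", round(true_alpha, 3),
      "classical:", round(classical_alpha(current), 3),
      "empirical Bayes:", round(eb_alpha(current), 3))
```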
345

Empirical Bayes estimation via wavelet series

Alotaibi, Mohammed B. 01 April 2003 (has links)
No description available.
346

Understanding Music Semantics and User Behavior with Probabilistic Latent Variable Models

Liang, Dawen January 2016 (has links)
Bayesian probabilistic modeling provides a powerful framework for building flexible models that incorporate latent structures through the likelihood model and the prior. When we specify a model, we make certain assumptions about the underlying data-generating process with respect to these latent structures. For example, the latent Dirichlet allocation (LDA) model assumes that when generating a document, we first select a latent topic and then select a word that often appears in the selected topic. We can uncover the latent structures conditioned on the observed data via posterior inference. In this dissertation, we apply the tools of probabilistic latent variable models and try to understand complex real-world data about music semantics and user behavior. We first look into the problem of automatic music tagging: inferring the semantic tags (e.g., "jazz", "piano", "happy") from the audio features. We treat music tagging as a matrix completion problem and apply the Poisson matrix factorization model jointly to the vector-quantized audio features and a "bag-of-tags" representation. This approach exploits the shared latent structure between semantic tags and acoustic codewords. We present experimental results on the Million Song Dataset for both annotation and retrieval tasks, illustrating the steady improvement in performance as more data is used. We then move to the intersection between music semantics and user behavior: music recommendation. The leading performance in music recommendation is achieved by collaborative filtering methods, which exploit the similarity patterns in users' listening histories. We address the fundamental cold-start problem of collaborative filtering: it cannot recommend new songs that no one has listened to. We train a neural network on semantic tagging information as a content model and use it as a prior in a collaborative filtering model. The proposed system is evaluated on the Million Song Dataset and shows comparable or better results than the collaborative filtering approaches, in addition to favorable performance in the cold-start case. Finally, we focus on general recommender systems. We examine two different types of data, implicit and explicit feedback, and introduce the notion of user exposure (whether or not a user is exposed to an item) as part of the data-generating process, which is latent for implicit data and observed for explicit data. For implicit data, we propose a probabilistic matrix factorization model and infer the user exposure from data. In the language of causal analysis (Imbens and Rubin, 2015), user exposure has a close connection to the assignment mechanism. We leverage this connection more directly for explicit data and develop a causal inference approach to recommender systems. We demonstrate that causal inference for recommender systems leads to improved generalization to new data. Exact posterior inference is generally intractable for latent variable models. Throughout this thesis, we design specific inference procedures to tractably analyze the large-scale data encountered in each scenario.
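A minimal generative sketch of the joint Poisson matrix factorization the abstract describes: songs share one non-negative latent vector between their vector-quantized codeword counts and their tag counts, which is what lets tags be predicted from audio. The dimensions and Gamma hyperparameters below are assumptions, and the variational inference used in the dissertation is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

# Dimensions assumed purely for illustration.
n_songs, n_codewords, n_tags, K = 1000, 512, 50, 25

# Gamma-Poisson (Poisson matrix factorization) generative sketch: each song has
# one non-negative latent vector theta_s shared by its codeword counts and its
# tag counts, so tag structure can be inferred from audio and vice versa.
theta = rng.gamma(shape=0.3, scale=1.0, size=(n_songs, K))           # song factors
beta_code = rng.gamma(shape=0.3, scale=1.0, size=(K, n_codewords))   # codeword loadings
beta_tag = rng.gamma(shape=0.3, scale=1.0, size=(K, n_tags))         # tag loadings

codeword_counts = rng.poisson(theta @ beta_code)   # vector-quantized audio features
tag_counts = rng.poisson(theta @ beta_tag)         # "bag of tags" representation

# Annotation amounts to ranking tags for a song by its expected tag intensities;
# for a brand-new song only the codeword counts are observed, and theta_s would
# first be inferred from them before scoring tags.
song = 0
top_tags = np.argsort(theta[song] @ beta_tag)[::-1][:5]
print(top_tags)
```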
347

Normative uncertainty

MacAskill, William January 2014 (has links)
Very often, we are unsure about what we ought to do. Under what conditions should we help to improve the lives of distant strangers rather than those of our family members? At what point does an embryo or foetus become a person, with all the rights that that entails? Is it ever permissible to raise and kill non-human animals in order to use their meat for food? Sometimes, this uncertainty arises out of empirical uncertainty: we might not know to what extent non-human animals feel pain, or how much we are really able to improve the lives of distant strangers compared to our family members. But this uncertainty can also arise out of fundamental normative uncertainty: out of not knowing, for example, what moral weight the wellbeing of distant strangers has compared to the wellbeing of our family; or whether non-human animals are worthy of moral concern even given knowledge of all the facts about their biology and psychology. In fact, for even moderately reflective agents, decision-making under normative uncertainty is ubiquitous. Given this, one might have expected philosophers to have devoted considerable research time to the question of how one ought to take one’s normative uncertainty into account in one’s decisions. But the issue has been largely neglected. This thesis attempts to begin to fill this gap. It addresses the question: what ought one to do when one is uncertain about what one ought to do? It develops a view that I call metanormativism: the view that there are second-order norms that govern action that are relative to a decision-maker’s uncertainty about first-order normative claims. It consists of two distinct parts. The first part (Chapters 1-4) develops a general metanormative theory. I argue in favour of the view that decision-makers should maximise expected choice-worthiness, treating normative uncertainty analogously with how they treat empirical uncertainty. I defend this view at length in response to two key problems, which I call the problem of merely ordinal theories and the problem of intertheoretic comparisons. The second part (Chapters 5-7) explores the implications of metanormativism for other philosophical issues. I suggest that it has important implications for the theory of rational action in the face of incomparable values, for the causal/evidential debate in decision theory, and for the value we should ascribe to research into moral philosophy.
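The decision rule defended in the first part, maximise expected choice-worthiness, can be stated in a few lines: weight each action's choice-worthiness under each moral theory by the decision-maker's credence in that theory, and pick the action with the highest weighted sum. The actions, theories, credences, and numbers below are invented, and the sketch simply assumes the intertheoretic comparability of choice-worthiness scales that the thesis spends considerable effort defending.

```python
# Toy illustration of "maximise expected choice-worthiness" (MEC).
credences = {"theory_A": 0.6, "theory_B": 0.4}
choice_worthiness = {
    "option_1": {"theory_A": 10.0, "theory_B": -100.0},
    "option_2": {"theory_A": 8.0, "theory_B": 0.0},
}

def expected_choice_worthiness(action):
    """Credence-weighted choice-worthiness of an action across moral theories."""
    return sum(credences[t] * choice_worthiness[action][t] for t in credences)

scores = {a: expected_choice_worthiness(a) for a in choice_worthiness}
best = max(scores, key=scores.get)
print(scores, "->", best)
```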
348

On the welfare economics of climate change

Dennig, Francis January 2014 (has links)
The three constituent chapters of this thesis tackle independent, self-contained research questions, all concerning welfare economics in general and its application to climate change policy in particular. Climate change is a policy problem for which the costs and benefits are distributed unequally across space and time, as well as one involving a high degree of uncertainty. Therefore, cost-benefit analysis of climate policy ought to be based on a welfare function that is sufficiently sophisticated to incorporate the three dimensions of aggregation: time, risk and space. Chapter 1 is an axiomatic treatment of a stylised model in which all three dimensions appear. The main result is a functional representation of the social welfare function for policy assessment in such situations. Chapter 2 is a numerical mitigation policy analysis. I modify William Nordhaus' RICE-2010 model by replacing his social welfare function with one that allows for different degrees of inequality aversion along the regional and inter-temporal dimensions. I find that, holding the inter-temporal coefficient of inequality aversion fixed, performing the optimisation with a greater degree of regional inequality aversion reduces the optimal carbon tax relative to treating the world as a single aggregate consumer. In Chapter 3 I analyse climate policy from the point of view of intergenerational transfers. I propose a system of transfers that allows future generations to compensate the current one for its mitigation effort and demonstrate the effects in an OLG model. When the marginal benefit to a - possibly distant - future generation is greater than the cost of compensating the current generation for its abatement effort, a Pareto improvement is possible by a combination of mitigation policy and transfer payments. I show that under very general assumptions the business-as-usual outcome is Pareto dominated by such policies and derive the conditions for the set of climate policies that are not thus dominated.
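One standard way to write a welfare function with separate regional and inter-temporal inequality aversion, consistent with the chapter's description though not taken from it, is to aggregate regions within each period into an equally-distributed-equivalent consumption level with aversion parameter gamma, and then aggregate over time with aversion eta and a discount rate. The toy data and parameter values below are assumptions; setting gamma = 0 corresponds to treating the world as a single aggregate consumer in each period.

```python
import numpy as np

def ede(c, pop, gamma):
    """Equally-distributed-equivalent consumption across regions with regional
    inequality aversion gamma (Atkinson form)."""
    w = pop / pop.sum()
    if np.isclose(gamma, 1.0):
        return np.exp(np.sum(w * np.log(c)))
    return np.sum(w * c ** (1.0 - gamma)) ** (1.0 / (1.0 - gamma))

def welfare(c, pop, eta, gamma, rho):
    """Two-parameter social welfare function: regional aggregation with aversion
    gamma inside each period, then discounted inter-temporal aggregation with
    aversion eta across periods.  c and pop have shape (periods, regions)."""
    total = 0.0
    for t in range(c.shape[0]):
        e = ede(c[t], pop[t], gamma)
        u = np.log(e) if np.isclose(eta, 1.0) else e ** (1.0 - eta) / (1.0 - eta)
        total += pop[t].sum() * u / (1.0 + rho) ** t
    return total

# Toy data: three periods, two regions with unequal consumption levels.
c = np.array([[5.0, 20.0], [6.0, 24.0], [7.0, 28.0]])
pop = np.ones_like(c)

# gamma = 0 treats each period's world as a single aggregate consumer;
# gamma > 0 adds regional inequality aversion with eta held fixed.
print(welfare(c, pop, eta=1.5, gamma=0.0, rho=0.015),
      welfare(c, pop, eta=1.5, gamma=1.5, rho=0.015))
```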
349

Risk and admissibility for a Weibull class of distributions

Negash, Efrem Ocubamicael 12 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: The Bayesian approach to decision-making is considered in this thesis for reliability/survival models pertaining to a Weibull class of distributions. A generalised right censored sampling scheme has been assumed and implemented. The Jeffreys' prior for the inverse mean lifetime and the survival function of the exponential model were derived. The consequent posterior distributions of these two parameters were obtained using this non-informative prior. In addition to the Jeffreys' prior, the natural conjugate prior was considered as a prior for the parameter of the exponential model and the consequent posterior distribution was derived. In many reliability problems, overestimating a certain parameter of interest is more detrimental than underestimating it, and hence the LINEX loss function was used to estimate the parameters and their consequent risk measures. Moreover, analogous derivations have been carried out for the commonly used symmetrical squared error loss function. The risk function, the posterior risk and the integrated risk of the estimators were obtained and are regarded in this thesis as the risk measures. The performance of the estimators has been compared relative to these risk measures. For the Jeffreys' prior under the squared error loss function, the comparison resulted in crossing-over risk functions, and hence none of these estimators is completely admissible. However, relative to the LINEX loss function, it was found that a correct Bayesian estimator outperforms an incorrectly chosen alternative. For the conjugate prior, on the other hand, crossing-over of the risk functions of the estimators was evident. In comparing the performance of the Bayesian estimators, numerical techniques such as Monte Carlo procedures were used whenever closed-form expressions of the risk measures do not exist. In similar fashion, the posterior risks and integrated risks were used in the performance comparisons. The Weibull pdf, with its scale and shape parameters, was also considered as a reliability model. The Jeffreys' prior and the consequent posterior distribution of the scale parameter of the Weibull model have also been derived when the shape parameter is known. In this case, the estimation process for the scale parameter is analogous to that of the exponential model. For the case when both parameters of the Weibull model are unknown, the Jeffreys' and the reference priors have been derived and the computational difficulty of the posterior analysis has been outlined. The Jeffreys' prior for the survival function of the Weibull model has also been derived when the shape parameter is known. In all cases, two forms of the scalar estimation error have been used to compare as many risk measures as possible. The performance of the estimators was compared for acceptability in a decision-making framework. This can be seen as a type of procedure that addresses the robustness of an estimator relative to a chosen loss function. / AFRIKAANSE OPSOMMING: Die Bayes-benadering tot besluitneming is in hierdie tesis beskou vir betroubaarheids- / oorlewingsmodelle wat behoort tot 'n Weibull klas van verdelings. 'n Veralgemene regs gesensoreerde steekproefnemingsplan is aanvaar en geïmplementeer. Die Jeffreyse prior vir die inverse van die gemiddelde leeftyd en die oorlewingsfunksie is afgelei vir die eksponensiële model.
Die gevolglike aposteriori-verdeling van hierdie twee parameters is afgelei, indien hierdie nie-inligtingge-wende apriori gebruik word. Addisioneel tot die Jeffreyse prior, is die natuurlike toegevoegde prior beskou vir die parameter van die eksponensiële model en ooreenstemmende aposteriori-verdeling is afgelei. In baie betroubaarheidsprobleme het die oorberaming van 'n parameter meer ernstige nagevolge as die onderberaming daarvan en omgekeerd en gevolglik is die LINEX verliesfunksie gebruik om die parameters te beraam tesame met ooreenstemmende risiko maatstawwe. Soortgelyke afleidings is gedoen vir hierdie algemene simmetriese kwadratiese verliesfunksie. Die risiko funksie, die aposteriori-risiko en die integreerde risiko van die beramers is verkry en word in hierdie tesis beskou as die risiko maatstawwe. Die gedrag van die beramers is vergelyk relatief tot hierdie risiko maatstawwe. Die vergelyking vir die Jeffreyse prior onder kwadratiese verliesfunksie het op oorkruisbare risiko funksies uitgevloei en gevolglik is geeneen van hierdie beramers volkome toelaatbaar nie. Relatief tot die LINEX verliesfunksie is egter gevind dat die korrekte Bayes-beramer beter vaar as die alternatiewe beramer. Aan die ander kant is gevind dat oorkruisbare risiko funksies van die beramers verkry word vir die toegevoegde apriori-verdeling. Met hierdie gedragsvergelykings van die beramers word numeriese tegnieke toegepas, soos die Monte Carlo prosedures, indien die maatstawwe nie in geslote vorm gevind kan word nie. Op soortgelyke wyse is die aposteriori-risiko en die integreerde risiko's gebruik in die gedragsvergelykings. Die Weibull waarskynlikheidsverdeling, met skaal- en vormingsparameter, is ook beskou as 'n betroubaarheidsmodel. Die Jeffreyse prior en die gevolglike aposteriori-verdeling van die skaalparameter van die Weibull model is afgelei, indien die vormingsparameter bekend is. In hierdie geval is die beramingsproses van die skaalparameter analoog aan die afleidings van die eksponensiële model. Indien beide parameters van die Weibull modelonbekend is, is die Jeffreyse prior en die verwysingsprior afgelei en is daarop gewys wat die berekeningskomplikasies is van 'n aposteriori-analise. Die Jeffreyse prior vir die oorlewingsfunksie van die Weibull model is ook afgelei, indien die vormingsparameter bekend is. In al die gevalle is twee vorms van die skalaar beramingsfoute gebruik in die vergelykings, sodat soveel as moontlik risiko maatstawwe vergelyk kan word. Die gedrag van die beramers is vergelyk vir aanvaarbaarheid binne die besluitnemingsraamwerk. Hierdie kan gesien word as 'n prosedure om die robuustheid van 'n beramer relatief tot 'n gekose verliesfunksie aan te spreek.
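For readers unfamiliar with the LINEX loss mentioned in the abstract: it penalises over- and under-estimation asymmetrically, and under it the Bayes estimator is −(1/a)·ln E[exp(−aθ) | data] rather than the posterior mean. The sketch below evaluates that estimator from posterior draws; the gamma posterior and the parameter values are assumptions for illustration and do not reproduce the thesis's closed-form derivations.

```python
import numpy as np

def linex_loss(estimate, truth, a=1.0):
    """LINEX loss: exp(a*d) - a*d - 1 with d = estimate - truth.  For a > 0,
    over-estimation is penalised much more heavily than under-estimation."""
    d = estimate - truth
    return np.exp(a * d) - a * d - 1.0

def bayes_estimate_linex(posterior_draws, a=1.0):
    """Bayes estimator under LINEX loss: -(1/a) * log E[exp(-a*theta) | data],
    approximated here from posterior draws."""
    return -np.log(np.mean(np.exp(-a * posterior_draws))) / a

# Illustrative posterior for an exponential model's inverse mean lifetime
# (a gamma posterior assumed purely for the sake of the example).
rng = np.random.default_rng(5)
draws = rng.gamma(shape=12.0, scale=1.0 / 30.0, size=100_000)

for name, est in (("posterior mean", draws.mean()),
                  ("LINEX-optimal", bayes_estimate_linex(draws, a=2.0))):
    exp_loss = np.mean(linex_loss(est, draws, a=2.0))
    print(f"{name}: estimate = {est:.4f}, expected posterior LINEX loss = {exp_loss:.5f}")
```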
350

A Bayesian approach to modelling field data on multi-species predator-prey interactions

Asseburg, Christian January 2006 (has links)
Multi-species functional response models are required to model the predation of generalist predators, which consume more than one prey species. In chapter 2, a new model for the multi-species functional response is presented. This model can describe generalist predators that exhibit functional responses of Holling type II to some of their prey and of type III to other prey. In chapter 3, I review some of the theoretical distinctions between Bayesian and frequentist statistics and show how Bayesian statistics are particularly well-suited for the fitting of functional response models because uncertainty can be represented comprehensively. In chapters 4 and 5, the multi-species functional response model is fitted to field data on two generalist predators: the hen harrier Circus cyaneus and the harp seal Phoca groenlandica. I am not aware of any previous Bayesian model of the multi-species functional response that has been fitted to field data. The hen harrier's functional response fitted in chapter 4 is strongly sigmoidal to the densities of red grouse Lagopus lagopus scoticus, but no type III shape was detected in the response to the two main prey species, field vole Microtus agrestis and meadow pipit Anthus pratensis. The impact of using Bayesian or frequentist models on the resulting functional response is discussed. In chapter 5, no functional response could be fitted to the data on harp seal predation. Possible reasons are discussed, including poor data quality or a lack of relevance of the available data for informing a behavioural functional response model. I conclude with a comparison of the role that functional responses play in behavioural, population and community ecology and emphasise the need for further research into unifying these different approaches to understanding predation, with particular reference to predator movement. In an appendix, I evaluate the possibility of using a functional response for inferring the abundances of prey species from performance indicators of generalist predators feeding on these prey. I argue that this approach may be futile in general, because a generalist predator's energy intake does not depend on the density of any single one of its prey, so that the possibly unknown densities of all prey need to be taken into account.
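A common generalised form of the multi-species Holling functional response, in the spirit of the model the abstract describes (though not necessarily its exact parameterisation), lets each prey type have its own shape exponent, so the same predator can show a type II response to one prey and a sigmoidal type III response to another. The species names and parameter values below are invented for illustration.

```python
import numpy as np

def multispecies_response(N, a, h, m):
    """Generalised multi-species Holling functional response: consumption rate
    of each prey type given the densities N of all prey.  m_i = 1 gives a
    hyperbolic type II response in prey i; m_i > 1 gives a sigmoidal type III."""
    N, a, h, m = (np.asarray(v, dtype=float) for v in (N, a, h, m))
    attack = a * N ** m
    return attack / (1.0 + np.sum(a * h * N ** m))

# Toy hen-harrier-like setting: a sigmoidal response to grouse and type II
# responses to voles and pipits.  All parameter values are invented.
prey = ["grouse", "vole", "pipit"]
a = [0.002, 0.05, 0.03]   # attack-rate coefficients
h = [0.6, 0.1, 0.1]       # handling times
m = [2.0, 1.0, 1.0]       # shape exponents: 2 -> type III, 1 -> type II

for grouse_density in [5.0, 20.0, 80.0]:
    rates = multispecies_response([grouse_density, 50.0, 100.0], a, h, m)
    print(grouse_density, dict(zip(prey, np.round(rates, 3))))
```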
