91 |
Some Theory and Applications of Probability in Quantum Mechanics / Ferrie, Christopher, January 2012 (has links)
This thesis investigates three distinct facets of the theory of quantum information. The first two, quantum state estimation and quantum process estimation, are closely related and deal with the question of how to estimate the classical parameters in a quantum mechanical model. The third attempts to bring quantum theory as close as possible to classical theory through the formalism of quasi-probability.
Building a large-scale quantum information processor is a significant challenge. First, we require an accurate characterization of the dynamics experienced by the device to allow for the application of error correcting codes and other tools for implementing useful quantum algorithms. The scaling of the computational resources needed to characterize a quantum system as a function of the number of subsystems is by now a well-studied problem (the scaling is generally exponential). However, irrespective of the computational resources necessary just to write down a classical description of a quantum state, we can ask about the experimental resources necessary to obtain data (measurement complexity) and the computational resources necessary to generate such a characterization (estimation complexity). These problems are studied here and approached from two directions.
The first problem we address is that of quantum state estimation. We apply high-level decision theoretic principles (applied in classical problems such as universal data compression) to the estimation of a qubit state. We prove that quantum states are more difficult to estimate than their classical counterparts by finding optimal estimation strategies. These strategies, requiring the solution to a difficult optimization problem, are difficult to implement in practice. Fortunately, we find estimation algorithms which come close to optimal but require far fewer resources to compute. Finally, we provide a classical analog of this quantum mechanical problem which reproduces, and gives intuitive explanations for, many of its features, such as why adaptive tomography can quadratically reduce its difficulty.
The second approach to the practical characterization of quantum devices is applied to the problem of quantum process estimation. This differs from the above analysis in two ways: (1) we apply strong restrictions on knowledge of various estimation and control parameters (the former making the problem easier, the latter making it harder); and (2) we consider the problem of designing future experiments based on the outcomes of past experiments. We show in test cases that adaptive protocols can exponentially outperform their off-line counterparts. Moreover, we adapt machine learning algorithms to the problem, bringing these experimental design methodologies into the realm of experimental feasibility.
In the final chapter we move away from estimation problems to show formally that a classical representation of quantum theory is not tenable. This intuitive conclusion is borne out through the connection to quasi-probability, where it is equivalent to the necessity of negative probability in all such representations of quantum theory. In particular, we generalize previous no-go theorems to arbitrary classical representations of quantum systems of arbitrary dimension. We also discuss recent progress in the program to identify quantum resources for subtheories of quantum theory and operational restrictions motivated by quantum computation.
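The kind of negativity the abstract refers to can be illustrated with a standard quasi-probability representation that is not specific to this thesis: Wootters' discrete Wigner function for a single qubit. The sketch below (the chosen Bloch vector is our own example) computes the four quasi-probabilities of a state for which one entry is necessarily negative.

```python
import numpy as np

# Pauli matrices and identity
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def wigner(rho):
    """Wootters' discrete Wigner function of a qubit state.

    Returns the quasi-probability at each of the four phase-space
    points (a, b), using the phase-point operators
    A_ab = (1/2)(I + (-1)^a Z + (-1)^b X + (-1)^(a+b) Y).
    """
    W = np.empty((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            A = 0.5 * (I + (-1) ** a * Z + (-1) ** b * X + (-1) ** (a + b) * Y)
            W[a, b] = 0.5 * np.trace(rho @ A).real
    return W

# A state along the -(1,1,1)/sqrt(3) Bloch direction: the entries of W
# sum to one like probabilities, but one of them is negative.
r = -1.0 / np.sqrt(3)
rho = 0.5 * (I + r * (X + Y + Z))
W = wigner(rho)
```

The entries always sum to one, but for states like this one no assignment of nonnegative values is possible, which is the phenomenon the chapter's no-go theorems generalize to arbitrary dimension.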
|
92 |
On 14C Bayesian Analysis and the Japanese Version of the Calibration Software OxCal / NAKAMURA, Toshio; NISHIMOTO, Hiroshi; OMORI, Takayuki, 03 1900 (has links)
Report of the 23rd Symposium of the Nagoya University Center for Chronological Research, fiscal year Heisei 22 (2010)
|
93 |
Nonparametric Bayesian analysis of some clustering problems / Ray, Shubhankar, 30 October 2006 (has links)
Nonparametric Bayesian models have been researched extensively in the past 10 years following the work of Escobar and West (1995) on sampling schemes for Dirichlet processes. The infinite mixture representation of the Dirichlet process makes it useful for clustering problems where the number of clusters is unknown. We develop nonparametric Bayesian models for two different clustering problems, namely functional and graphical clustering.
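The partition distribution induced by the Dirichlet process, often described as the Chinese restaurant process, is what lets these models leave the number of clusters unspecified. A minimal sketch of that sampling scheme (our own illustration, not code from the thesis):

```python
import random

def crp_assignments(n_items, alpha, seed=0):
    """Sample cluster assignments from a Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to
    that cluster's current size, or opens a new cluster with
    probability proportional to the concentration parameter alpha, so
    the number of clusters is itself random.
    """
    rng = random.Random(seed)
    assignments = []     # cluster index for each item
    cluster_sizes = []   # current number of items in each cluster
    for i in range(n_items):
        # weights: existing cluster sizes, plus alpha for a new cluster
        weights = cluster_sizes + [alpha]
        r = rng.random() * (i + alpha)  # sizes sum to i
        cum = 0.0
        for k, w in enumerate(weights):
            cum += w
            if r < cum:
                break
        if k == len(cluster_sizes):  # opened a new cluster
            cluster_sizes.append(1)
        else:
            cluster_sizes[k] += 1
        assignments.append(k)
    return assignments

labels = crp_assignments(100, alpha=1.0)
```

Larger alpha yields more, smaller clusters; the expected number of clusters grows only logarithmically in the number of items.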
We propose a nonparametric Bayes wavelet model for clustering of functional or longitudinal data. The wavelet modelling is aimed at the resolution of global and local features during clustering. The model also allows the elicitation of prior belief about the regularity of the functions and has the ability to adapt to a wide range of functional regularity. Posterior inference is carried out by Gibbs sampling with conjugate priors for fast computation. We use simulated as well as real datasets to illustrate the suitability of the approach over other alternatives.
The functional clustering model is extended to analyze splice microarray data. New microarray technologies probe consecutive segments along genes to observe alternative splicing (AS) mechanisms that produce multiple proteins from a single gene. Clues regarding the number of splice forms can be obtained by clustering the functional expression profiles from different tissues. The analysis was carried out on the Rosetta dataset (Johnson et al., 2003) to obtain a splice variant by tissue distribution for all the 10,000 genes. We were able to identify a number of splice forms that appear to be unique to cancer.
We propose a Bayesian model for partitioning graphs depicting dependencies in a collection of objects. After suitable transformations and modelling techniques, the problem of graph cutting can be approached by nonparametric Bayes clustering. We draw motivation from recent work (Dhillon, 2001) showing the equivalence of kernel k-means clustering and certain graph cutting algorithms. It is shown that loss functions similar to the kernel k-means naturally arise in this model, and the minimization of the associated posterior risk comprises an effective graph cutting strategy. We present here results from the analysis of two microarray datasets, namely the melanoma dataset (Bittner et al., 2000) and the sarcoma dataset (Nykter et al., 2006).
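The kernel k-means side of the Dhillon (2001) equivalence can be sketched concretely. The following is an illustrative implementation (not the thesis's model; the two-blob data and linear kernel are invented for the example) that clusters using only a precomputed kernel matrix, exactly the setting in which the objective matches certain weighted graph-cut criteria:

```python
import numpy as np

def kernel_kmeans(K, init_labels, n_iter=20):
    """Kernel k-means using only a precomputed kernel matrix K.

    Squared distance from phi(x_i) to the centroid of cluster c is
    computable from kernel entries alone:
      K_ii - (2/|c|) sum_{j in c} K_ij + (1/|c|^2) sum_{j,l in c} K_jl
    """
    labels = np.asarray(init_labels).copy()
    n = K.shape[0]
    k = labels.max() + 1
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            size = mask.sum()
            if size == 0:
                continue  # empty cluster attracts no points
            cross = K[:, mask].sum(axis=1) / size
            within = K[np.ix_(mask, mask)].sum() / size ** 2
            dist[:, c] = diag - 2 * cross + within
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # converged
        labels = new_labels
    return labels

# Toy example: two well-separated blobs with a linear kernel K = X X^T,
# seeded by placing one first-blob point alone in its own cluster.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
               rng.normal(5.0, 0.1, (10, 2))])
K = X @ X.T
init = np.zeros(20, dtype=int)
init[0] = 1
labels = kernel_kmeans(K, init)
```

With a graph affinity matrix substituted for K, the same updates minimize a graph-cut objective, which is the bridge the thesis exploits.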
|
94 |
Uncertainty quantification of volumetric and material balance analysis of gas reservoirs with water influx using a Bayesian framework / Aprilia, Asti Wulandari, 25 April 2007 (has links)
Accurately estimating hydrocarbon reserves is important, because it affects every phase of the oil and gas business. Unfortunately, reserves estimation is always uncertain, since perfect information is seldom available from the reservoir, and uncertainty can complicate the decision-making process. Many important decisions have to be made without knowing exactly what the ultimate outcome will be from a decision made today. Thus, quantifying the uncertainty is extremely important.
Two methods for estimating original hydrocarbons in place (OHIP) are the volumetric and material balance methods. The volumetric method is convenient for calculating OHIP during the early development period, while the material balance method can be used later, after performance data, such as pressure and production data, become available.
In this work, I propose a methodology for using a Bayesian approach to quantify the uncertainty of original gas in place (G), aquifer productivity index (J), and the volume of the aquifer (Wi) by combining volumetric and material balance analyses in a water-drive gas reservoir.
The results show that we potentially have significant non-uniqueness (i.e., large uncertainty) when we consider only volumetric analyses or material balance analyses. By combining the results from both analyses, the non-uniqueness can be reduced, resulting in OGIP and aquifer parameter estimates with lower uncertainty. By understanding the uncertainty, we can expect better management decision making.
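The uncertainty reduction from combining the two analyses can be sketched with Bayes' rule on a grid. The numbers below are invented, and Gaussians stand in for whatever distributions the thesis actually derives; the point is only that the combined posterior is narrower than either stand-alone analysis:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Grid of candidate OGIP values (Bcf) and the two stand-alone analyses
# (hypothetical means and spreads).
G = np.linspace(10.0, 200.0, 2000)
prior = normal_pdf(G, 100.0, 30.0)       # volumetric analysis: wide
likelihood = normal_pdf(G, 90.0, 25.0)   # material balance: also wide

# Bayes' rule on the grid: posterior proportional to prior x likelihood.
w = prior * likelihood
w /= w.sum()
mean = (G * w).sum()
sd = np.sqrt(((G - mean) ** 2 * w).sum())
# The combined sd (about 19 Bcf) is below either stand-alone sd (30, 25).
```

For Gaussian inputs the combined precision is the sum of the individual precisions, which is why the posterior standard deviation must fall below both inputs.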
|
95 |
Models for heterogeneous variable selection / Gilbride, Timothy J., 2004 (has links)
Thesis (Ph. D.)--Ohio State University, 2004. / Title from first page of PDF file. Document formatted into pages; contains xii, 138 p.; also includes graphics. Includes abstract and vita. Advisor: Greg M. Allenby, Dept. of Business Administration. Includes bibliographical references (p. 134-138).
|
96 |
Bayesian scientific methodology: a naturalistic approach / Yeo, Yeongseo, 2002 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2002. / Typescript. Vita. Includes bibliographical references (leaves 192-195). Also available on the Internet.
|
98 |
Bayesian hierarchical parametric survival analysis for NBA career longevity / Lakin, Richard Thomas, 21 August 2012 (has links)
In evaluating a prospective NBA player, one might consider past performance in the player's previous years of competition. In doing so, a general manager may ask the following question: do certain characteristics of a player's past statistics play a role in how long the player will last in the NBA? In this study, we examine data from players who entered the NBA in a five-year period (the 1997–1998 through 2001–2002 seasons), looking at attributes from their collegiate careers to see if they have any effect on career longevity. We consider basic statistics taken for each of these players, such as field goal percentage, points per game, rebounds per game, and assists per game. We aim to use Bayesian survival methods to model these event times, while exploiting the hierarchical nature of the data. We look at two types of models and perform model diagnostics to determine which of the two we prefer.
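The building block of any parametric survival model like those the abstract describes is a censored log-likelihood: active players contribute only a survival term, while finished careers also contribute a hazard term. A hedged sketch with a Weibull model and invented career data (the thesis's actual models, priors, and hierarchy are not reproduced here):

```python
import math

def weibull_loglik(times, events, shape, scale):
    """Log-likelihood of Weibull career lengths with right-censoring.

    times: career length in seasons; events: 1 if the career has ended
    (an observed exit), 0 if the player is still active (censored).
    """
    ll = 0.0
    for t, d in zip(times, events):
        if d:  # observed exit: log f(t) = log h(t) + log S(t)
            ll += math.log(shape / scale) + (shape - 1) * math.log(t / scale)
        ll -= (t / scale) ** shape  # log S(t) = -(t/scale)^shape
    return ll

# Invented data: four finished careers and one still-active player.
times = [2.0, 5.0, 11.0, 3.0, 14.0]
events = [1, 1, 1, 1, 0]

# Crude grid search over the scale at a fixed shape, a stand-in for
# the full posterior sampling a Bayesian treatment would use.
shape_fixed = 1.2
best_scale = max((s / 10 for s in range(20, 200)),
                 key=lambda s: weibull_loglik(times, events, shape_fixed, s))
```

With shape fixed at 1, the model reduces to the exponential distribution, which gives a quick sanity check on the likelihood.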
|
99 |
Applied Bayesian inference: natural language modelling and visual feature tracking / Scheffler, Carl, January 2012 (has links)
No description available.
|
100 |
Empirical Bayes estimation of small area proportions / Farrell, Patrick John, January 1991 (has links)
Due to the nature of survey design, the estimation of parameters associated with small areas is extremely problematic. In this study, techniques for the estimation of small area proportions are proposed and implemented. More specifically, empirical Bayes estimation methodologies, where random effects which reflect the complex structure of a multi-stage sample design are incorporated into logistic regression models, are derived and studied.
The proposed techniques are applied to data from the 1950 United States Census to predict local labor force participation rates of females. Results are compared with those obtained using unbiased and synthetic estimation approaches.
Using the proposed methodologies, a sensitivity analysis concerning the prior distribution assumption, conducted with a view toward outlier detection, is performed. The use of bootstrap techniques to correct measures of uncertainty is also studied.
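The empirical Bayes idea for small-area proportions can be sketched in its simplest form, a beta-binomial model rather than the logistic mixed models the thesis derives, and with invented area counts: the prior is estimated from the pooled data, and each area's estimate is shrunk toward the overall rate, more strongly where the sample is small.

```python
import numpy as np

# Invented data: successes (e.g. labor-force participants) and sampled
# individuals in five small areas.
successes = np.array([2, 5, 1, 40, 8])
trials    = np.array([10, 12, 4, 90, 20])
p_hat = successes / trials          # unstable direct estimates

# "Empirical" step: fit a Beta(a, b) prior from the data by matching
# the pooled mean, with an assumed prior strength m = a + b.
overall = successes.sum() / trials.sum()
m = 15.0                            # assumed prior sample size
a, b = overall * m, (1 - overall) * m

# Posterior mean per area: a convex combination of the direct estimate
# and the overall rate, weighted by each area's sample size.
p_eb = (successes + a) / (trials + m)
```

Areas with few sampled individuals (here the third area, with 4) are pulled strongly toward the overall rate, while well-sampled areas barely move, which is the stabilizing behavior the thesis develops in a fully model-based way.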
|