241 |
Can Induction Strengthen Inference to the Best Explanation? / Thomson, Neil A. January 2008 (has links)
In this paper I will argue that the controversial process of inferring to the best explanation (IBE) can be made more coherent if its formulation recognizes and includes a significant inductive component. To do so, I will examine the relationship between Harman’s, Lipton’s, and Fumerton’s positions on IBE, settling ultimately upon a conception that categorically rejects Harman’s account while appropriating portions of both Lipton’s and Fumerton’s accounts. The resulting formulation will be called inductive-IBE, and I will argue that this formulation more accurately describes the inferential practices employed in scientific inquiry. The upshot of my argument, that IBE contains a significant inductive component, will be that any conclusion borne of such inductive inference can be at best likely, never necessary. And, although previous accounts of IBE have accepted the defeasibility of IBE, I will argue that inductive-IBE is more descriptive because it tells us why this fallibility exists. That is, although the Liptonian conception of IBE acknowledges that IBE is fallible, my account specifically addresses this characteristic and, thus, is more descriptive and informative in this regard. I will use inductive-IBE to argue, contra van Fraassen, that IBE can be a legitimate form of inference that leads science to true theories and real entities.
|
242 |
Bayesian inference for source determination in the atmospheric environment / Keats, William Andrew January 2009 (has links)
In the event of a hazardous release (chemical, biological, or radiological) in an urban environment, monitoring agencies must have the tools to locate and characterize the source of the emission in order to respond and minimize damage. Given a finite and noisy set of concentration measurements, determining the source location, strength and time of release is an ill-posed inverse problem. We treat this problem using Bayesian inference, a framework under which uncertainties in modelled and measured concentrations can be propagated, in a consistent, rigorous manner, toward a final probabilistic estimate for the source.
The Bayesian methodology operates independently of the chosen dispersion model, meaning it can be applied equally well to problems in urban environments, at regional scales, or at global scales. Both Lagrangian stochastic (particle-tracking) and Eulerian (fixed-grid, finite-volume) dispersion models have been used successfully. Calculations are accomplished efficiently by using adjoint (backward) dispersion models, which reduces the computational effort required from calculating one [forward] plume per possible source configuration to calculating one [backward] plume per detector. Markov chain Monte Carlo (MCMC) is used to efficiently sample from the posterior distribution for the source parameters; both the Metropolis-Hastings and hybrid Hamiltonian algorithms are used.
In this thesis, four applications falling under the rubric of source determination are addressed: dispersion in highly disturbed flow fields characteristic of built-up (urban) environments; dispersion of a nonconservative scalar over flat terrain in a statistically stationary and horizontally homogeneous (turbulent) wind field; optimal placement of an auxiliary detector using a decision-theoretic approach; and source apportionment of particulate matter (PM) using a chemical mass balance (CMB) receptor model. For the first application, the data sets used to validate the proposed methodology include a water-channel simulation of the near-field dispersion of contaminant plumes in a large array of building-like obstacles (Mock Urban Setting Trial) and a full-scale field experiment (Joint Urban 2003) in Oklahoma City. For the second and third applications, the background wind and terrain conditions are based on those encountered during the Project Prairie Grass field experiment; mean concentration and turbulent scalar flux data are synthesized using a Lagrangian stochastic model where necessary. In the fourth and final application, Bayesian source apportionment results are compared to the US Environmental Protection Agency's standard CMB model using a test case involving PM data from Fresno, California. For each of the applications addressed in this thesis, combining Bayesian inference with appropriate computational techniques results in a computationally efficient methodology for performing source determination.
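To make the posterior-sampling step concrete, here is a minimal sketch (an illustration, not the author's code) of random-walk Metropolis-Hastings for a source's location and strength given noisy concentration readings; the toy Gaussian-plume forward model, detector coordinates, and noise level are invented assumptions standing in for the dispersion models and data sets used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical detector locations (m) and a toy plume forward model;
# a real study would substitute an urban or adjoint dispersion model here.
detectors = np.array([[50.0, 10.0], [80.0, -20.0], [120.0, 5.0], [150.0, 30.0]])

def forward(src_xy, strength):
    """Predicted concentration at each detector for a candidate source."""
    dx = np.maximum(detectors[:, 0] - src_xy[0], 1.0)   # source assumed upwind
    dy = detectors[:, 1] - src_xy[1]
    sigma = 0.2 * dx                                     # crude plume spread with distance
    return strength / (2 * np.pi * sigma**2) * np.exp(-dy**2 / (2 * sigma**2))

# Synthetic "measurements" from a true source, plus noise.
true_theta = np.array([10.0, 5.0, 2.0])                  # (x, y, strength)
noise_sd = 1e-4
obs = forward(true_theta[:2], true_theta[2]) + rng.normal(0, noise_sd, len(detectors))

def log_post(theta):
    x, y, q = theta
    if q <= 0:                                            # flat prior, positive strength
        return -np.inf
    resid = obs - forward((x, y), q)
    return -0.5 * np.sum((resid / noise_sd) ** 2)

# Random-walk Metropolis-Hastings over (x, y, strength).
theta = np.array([0.0, 0.0, 1.0])
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [2.0, 2.0, 0.2])
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
samples = np.array(samples[5000:])                        # discard burn-in
print("posterior mean (x, y, q):", samples.mean(axis=0))
```

The same structure carries over to the adjoint formulation: only the forward model changes, while the sampler and the likelihood stay as above.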
|
243 |
Causal assumptions : some responses to Nancy Cartwright / Kristtorn, Sonje 31 July 2007
The theories of causality put forward by Pearl and the Spirtes-Glymour-Scheines group have entered the mainstream of statistical thinking. These theories show that under ideal conditions, causal relationships can be inferred from purely statistical observational data. Nancy Cartwright advances certain arguments against these causal inference algorithms: the well-known factory example argument against the Causal Markov condition and an argument against faithfulness. We point to the dependence of the first argument on undefined categories external to the technical apparatus of causal inference algorithms. We acknowledge the possible practical implications of her second argument, yet we maintain, with respect to both arguments, that this variety of causal inference, if not universal, is nonetheless eminently useful. Cartwright argues against assumptions that are essential not only to causal inference algorithms but to causal inference generally; even if, as she contends, these assumptions are not without exception, the same is true of other, likewise essential, assumptions. We indicate that causal inference is an iterative process and that causal inference algorithms assist, rather than replace, that process as performed by human beings.
|
244 |
Bayesian Mixture Modeling Approaches for Intermediate Variables and Causal Inference / Schwartz, Scott Lee January 2010 (has links)
This thesis examines causal inference topics involving intermediate variables, and uses Bayesian methodologies to advance analysis capabilities in these areas. First, joint modeling of outcome variables with intermediate variables is considered in the context of birthweight and censored gestational age analyses. The proposed methodology provides improved inference capabilities for birthweight and gestational age, avoids the post-treatment selection bias associated with conditioning on gestational age, and appropriately assesses the uncertainty associated with censored gestational age. Second, principal stratification methodology for settings where causal inference analysis requires appropriate adjustment for intermediate variables is extended to observational settings with binary treatments and binary intermediate variables. This is done by uncovering the structural pathways of unmeasured confounding that affect principal stratification analysis and directly incorporating them into a model-based sensitivity analysis methodology. Demonstration focuses on a study of the efficacy of influenza vaccination in elderly populations. Third, the flexibility, interpretability, and capability of principal stratification analyses for continuous intermediate variables are improved by replacing the current fully parametric methodologies with semiparametric Bayesian alternatives. This is one of the first uses of nonparametric techniques in causal inference analysis, and it opens a connection between the two fields. Demonstration focuses on two studies: one involving a cholesterol-reduction drug, and one examining the effect of physical activity on cardiovascular disease as it relates to body mass index. / Dissertation
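To make the "Bayesian mixture modeling" ingredient concrete, the following is a minimal sketch (not taken from the dissertation) of a Gibbs sampler for a two-component Gaussian mixture, the simplest model of the kind that can stand in for latent strata; the synthetic data, flat priors, and fixed unit variances are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a two-component Gaussian mixture (stand-in for latent strata).
n = 500
z_true = rng.uniform(size=n) < 0.3
y = np.where(z_true, rng.normal(2.0, 1.0, n), rng.normal(-1.0, 1.0, n))

# Gibbs sampler for the mixture weight and component means (variances fixed at 1).
pi, mu = 0.5, np.array([1.0, -2.0])
draws = []
for _ in range(2000):
    # 1) Sample component labels given the current parameters.
    p1 = pi * np.exp(-0.5 * (y - mu[0]) ** 2)
    p0 = (1 - pi) * np.exp(-0.5 * (y - mu[1]) ** 2)
    z = rng.uniform(size=n) < p1 / (p1 + p0)
    # 2) Sample the weight from its Beta full conditional (uniform prior).
    pi = rng.beta(1 + z.sum(), 1 + (~z).sum())
    # 3) Sample each mean from its Normal full conditional (flat prior, unit variance).
    for k, mask in enumerate([z, ~z]):
        m = max(mask.sum(), 1)
        mu[k] = rng.normal(y[mask].mean() if mask.any() else 0.0, 1 / np.sqrt(m))
    draws.append((pi, mu.copy()))

post = draws[500:]
print("posterior mean weight:", np.mean([d[0] for d in post]))
print("posterior mean components:", np.mean([d[1] for d in post], axis=0))
```

The semiparametric alternatives described in the abstract replace the fixed number of components with a nonparametric prior, but the label-then-parameter sampling pattern is the same.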
|
245 |
THE PERSISTENCE OF INFERENCES IN MEMORY FOR YOUNGER AND OLDER ADULTS / Guillory, Jimmeka J. 2009 May 1900 (has links)
Younger and older adults’ susceptibility to the continued influence of inferences in memory was examined using a paradigm implemented by Wilkes and Leatherbarrow. Research has shown that younger adults have difficulty forgetting inferences they make after reading a passage, even if the information that the inferences are based on is later shown to be untrue. The current study examined the effects of these inferences on memory in the lab and tested whether older adults, like younger adults, are influenced by the lingering effects of these false inferences. In addition, this study examined the nature of these inferences by examining younger and older adults’ subjective experiences and confidence associated with factual recall and incorrect inference recall. Results showed that younger and older adults are equally susceptible to the continued influence of inferences. Both younger and older adults gave primarily remember judgments to factual questions but primarily believe judgments to inference questions. This is an important finding because it demonstrates that people may go against what they remember or know occurred because of a lingering belief that the information might still be true. Also, the finding that participants do actually give more believe responses to inference questions is important because it demonstrates that there is a third state of awareness that people will readily use when making inferences. Participants were also more confident when making remember and know judgments compared to believe judgments. This is an interesting finding because it supports the theory that both remember and know judgments can be associated with high confidence.
|
246 |
Essays on Efficiency Analysis / Asava-Vallobh, Norabajra 2009 May 1900 (has links)
This dissertation consists of four essays that investigate efficiency analysis, especially in the presence of non-discretionary inputs. A new multi-stage Data Envelopment Analysis (DEA) approach for non-discretionary inputs, discussions of statistical inference, and applications are provided. In the first essay, I propose a multi-stage DEA model to address the non-discretionary input issue and provide a simulation analysis that illustrates the implementation and potential advantages of the new approach relative to the leading existing multi-stage models for non-discretionary inputs, such as Ruggiero's 1998 model and Fried, Lovell, Schmidt, and Yaisawarng's 2002 model. The simulation results also suggest that the constant returns to scale assumption is preferable when observations are of similar size, whereas variable returns to scale may be more appropriate when their scales differ. In the second essay, I comment on Simar and Wilson's 2007 work. My simulation evidence shows that traditional statistical inference does not underperform the bootstrap procedure proposed by Simar and Wilson. Moreover, my results show that the truncated model they recommend does not outperform the tobit model in terms of statistical inference. Therefore, the traditional method (the t-test) and the tobit model should continue to be considered applicable tools for a multi-stage DEA model with non-discretionary inputs, despite contrary claims by Simar and Wilson. The third essay applies my new approach to data from Texas school districts. The results suggest that a lagged variable (e.g., students' performance in the previous year), which has been used in the literature, may not play an important role in determining efficiency scores. This implies that one may not need access to panel data on individual scores to study school efficiency. My final essay applies a standard DEA model and the Malmquist productivity index to commercial banks in Thailand in order to compare their efficiency and productivity before and after Thailand's Financial Sector Master Plan (FSMP), implemented in 2004.
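As background for the DEA machinery these essays build on, here is a minimal sketch of the standard input-oriented CCR envelopment linear program solved with SciPy; it deliberately omits the non-discretionary-input and multi-stage extensions that are the dissertation's actual contribution, and the toy data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores via the envelopment LP.

    X: (n_dmu, n_inputs) inputs, Y: (n_dmu, n_outputs) outputs.
    Returns an efficiency score in (0, 1] for each decision-making unit.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(1 + n)
        c[0] = 1.0                                   # minimize theta
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[o]                          # sum_j lambda_j * x_ij <= theta * x_io
        A_ub[:m, 1:] = X.T
        A_ub[m:, 1:] = -Y.T                          # sum_j lambda_j * y_rj >= y_ro
        b_ub[m:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores[o] = res.x[0]
    return scores

# Toy example: one input, one output, four school-district-like units.
X = np.array([[2.0], [4.0], [6.0], [3.0]])
Y = np.array([[1.0], [2.0], [2.0], [1.0]])
print(dea_ccr_input(X, Y))
```

A multi-stage treatment of non-discretionary inputs would then regress (or otherwise adjust) these first-stage scores on the non-discretionary variables, which is precisely where the choice between tobit, truncated, and bootstrap inference discussed above matters.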
|
247 |
Memory bias : why we underestimate the duration of future events / Roy, Michael M. January 2003 (has links)
Thesis (Ph. D.)--University of California, San Diego, and San Diego State University, 2003. / Vita. Includes bibliographical references (leaves 98-102).
|
248 |
Optimization in non-parametric survival analysis and climate change modeling / Teodorescu, Iuliana 01 January 2013 (has links)
Many of the open problems of current interest in probability and statistics involve complicated data sets that do not satisfy the strong assumptions of being independent and identically distributed. Often, the samples are known only empirically, and making assumptions about underlying parametric distributions is not warranted by the insufficient information available. Under such circumstances, the usual Fisher or parametric Bayes approaches cannot be used to model the data or make predictions. However, this situation is quite often encountered in some of the main challenges facing statistical, data-driven studies of climate change, clinical studies, or financial markets, to name a few. We propose a novel approach, based on large deviations theory, convex optimization, and recent results on surrogate loss functions for classifier-type problems, that can be used to estimate the probability of large deviations for complicated data. This may include, for instance, high-dimensional data, highly correlated data, or very sparse data. The thesis introduces the new approach, reviews the currently known theoretical results, and then presents a number of numerical explorations meant to quantify how far the approximation of survival functions via the large deviations principle can be taken once we leave the limitations imposed by the existing theoretical results. The explorations are encouraging, indicating that the new approximation scheme may indeed be very efficient and can be used under much more general conditions than those warranted by the current theoretical thresholds. After applying the new methodology to two important contemporary problems (atmospheric CO2 data and the El Niño/La Niña phenomena), we conclude with a summary outline of possible further research.
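To give a feel for the kind of approximation discussed above, the following is a minimal sketch (an illustration, not code from the thesis) of the classical Chernoff/Cramér route: the rate function is the Legendre transform of the empirical log moment generating function, obtained here by one-dimensional convex optimization; the exponential sample and the threshold are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

rng = np.random.default_rng(2)
data = rng.exponential(scale=1.0, size=2000)   # empirical sample, mean ~ 1

def rate(a, x):
    """Empirical Cramér rate function I(a) = sup_t [t*a - log mean(exp(t*x))]."""
    log_mgf = lambda t: logsumexp(t * x) - np.log(len(x))
    res = minimize_scalar(lambda t: -(t * a - log_mgf(t)),
                          bounds=(0.0, 50.0), method="bounded")
    return -res.fun

# Large-deviations approximation of the survival function of a sample mean:
#   P( mean of n fresh draws >= a )  ~  exp(-n * I(a))
n, a = 50, 1.5
print("approx tail probability:", np.exp(-n * rate(a, data)))
```

The thesis's contribution lies in pushing this idea beyond the i.i.d. setting that Cramér's theorem assumes; the sketch only shows the baseline construction the numerical explorations start from.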
|
249 |
An Evaluation of Clustering and Classification Algorithms in Life-Logging Devices / Amlinger, Anton January 2015 (has links)
Using life-logging devices and wearables is a growing trend in today’s society. These devices yield vast amounts of data that cannot be surveyed or grasped at a glance because of their sheer volume. Gathering a qualitative, comprehensible overview of this quantitative information is essential if life-logging services are to serve their purpose. This thesis provides a comparative overview of CLARANS, DBSCAN and SLINK, representing different branches of clustering algorithm types, as tools for activity detection in geo-spatial data sets. The detected activities are then classified using a simple model whose parameters are learned via Bayesian inference, as a demonstration of a different branch of clustering. Results are evaluated using silhouettes for the geo-spatial clustering and a user study for the final classification. The results are promising as an outline for a classification and activity-detection framework, and shed light on various pitfalls that might be encountered when implementing such a service.
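As a concrete illustration of the density-based branch compared in the thesis, here is a small sketch using scikit-learn's DBSCAN with silhouette scoring on synthetic GPS-like points; the data, eps, and min_samples values are invented for the example and are not the thesis's configuration.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)

# Synthetic geo-spatial points (e.g., GPS fixes projected to metres):
# two dense "activity" clusters plus scattered transit points.
stops = np.vstack([rng.normal([0, 0], 15, (200, 2)),
                   rng.normal([400, 300], 15, (200, 2))])
transit = rng.uniform([-100, -100], [500, 400], (60, 2))
points = np.vstack([stops, transit])

# DBSCAN: eps is the neighbourhood radius in metres, min_samples the density threshold.
labels = DBSCAN(eps=30.0, min_samples=10).fit_predict(points)

core = labels != -1                       # ignore noise points when scoring
if len(set(labels[core])) > 1:
    print("clusters found:", len(set(labels[core])))
    print("silhouette:", silhouette_score(points[core], labels[core]))
```

Swapping in CLARANS or SLINK changes only the clustering call; the silhouette evaluation stays the same, which is what makes the comparison in the thesis possible.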
|
250 |
Forward and inverse modeling of fire physics towards fire scene reconstructions / Overholt, Kristopher James 06 November 2013 (has links)
Fire models are routinely used to evaluate life safety aspects of building design projects and are being used more often in fire and arson investigations as well as reconstructions of firefighter line-of-duty deaths and injuries. A fire within a compartment effectively leaves behind a record of fire activity and history (i.e., fire signatures). Fire and arson investigators can utilize these fire signatures in the determination of cause and origin during fire reconstruction exercises. Researchers conducting fire experiments can utilize this record of fire activity to better understand the underlying physics. In all of these applications, the fire heat release rate (HRR), location of a fire, and smoke production are important parameters that govern the evolution of thermal conditions within a fire compartment. These input parameters can be a large source of uncertainty in fire models, especially in scenarios in which experimental data or detailed information on fire behavior are not available. To better understand fire behavior indicators related to soot, the deposition of soot onto surfaces was considered. Improvements to a soot deposition submodel were implemented in a computational fluid dynamics (CFD) fire model. To better understand fire behavior indicators related to fire size, an inverse HRR methodology was developed that calculates a transient HRR in a compartment based on measured temperatures resulting from a fire source. To address issues related to the uncertainty of input parameters, an inversion framework was developed that has applications towards fire scene reconstructions. Rather than using point estimates of input parameters, a statistical inversion framework based on the Bayesian inference approach was used to determine probability distributions of input parameters. These probability distributions contain uncertainty information about the input parameters and can be propagated through fire models to obtain uncertainty information about predicted quantities of interest. The Bayesian inference approach was applied to various fire problems and coupled with zone and CFD fire models to extend the physical capability and accuracy of the inversion framework. Example applications include the estimation of both steady-state and transient fire sizes in a compartment, material properties related to pyrolysis, and the location of a fire in a compartment.
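To illustrate the kind of inversion described above in its simplest form, here is a sketch (not the author's code) of a grid-based Bayesian estimate of a steady-state heat release rate from noisy hot-gas-layer temperatures; the power-law forward model, the lumped compartment constant, and the noise level are all invented assumptions standing in for the zone and CFD models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward model: hot-gas-layer temperature rise scaling as Q^(2/3);
# the constant C lumps ventilation and wall-loss terms and is hypothetical.
C = 12.0                                  # degC per kW^(2/3), invented compartment
forward = lambda q_kw: C * q_kw ** (2.0 / 3.0)

# Synthetic "measurements": three thermocouple readings from a 150 kW fire.
true_q = 150.0
noise_sd = 15.0                           # assumed measurement/model error, degC
obs = forward(true_q) + rng.normal(0.0, noise_sd, size=3)

# Grid-based Bayesian inversion for the heat release rate.
q_grid = np.linspace(1.0, 500.0, 2000)    # uniform prior over a plausible HRR range
log_like = np.array([-0.5 * np.sum(((obs - forward(q)) / noise_sd) ** 2) for q in q_grid])
post = np.exp(log_like - log_like.max())
post /= post.sum()                        # normalize to a probability mass on the grid

mean_q = np.sum(q_grid * post)
print(f"posterior mean HRR: {mean_q:.0f} kW")
```

The resulting posterior, rather than a single point estimate, is what gets propagated forward through the fire model to quantify uncertainty in the predicted thermal conditions.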
|