1

Animating the EPR-Experiment: Reasoning from Error in the Search for Bell Violations

Vasudevan, Anubav 11 January 2005 (has links)
When faced with Duhemian problems of underdetermination, scientific method suggests neither a circumvention of such difficulties via the uncritical acceptance of background assumptions, nor the employment of epistemically unsatisfying subjectivist models of rational retainment. Instead, scientists are challenged to attack problems of underdetermination 'head-on', through a careful analysis of the severity of the testing procedures responsible for the production and modeling of their anomalous data. Researchers faced with the task of explaining empirical anomalies employ a number of diverse and clever experimental techniques designed to cut through the Duhemian mists and account for potential sources of error that might weaken an otherwise warranted inference. In lieu of such progressive experimental procedures, scientists try to identify the actual inferential work that an existing experiment is capable of providing, so as to avoid ascribing to its output more discriminative power than it is rightfully due. We argue that the various strategies adopted by researchers involved in the testing of Bell's inequality are well represented by Mayo's error-statistical notion of scientific evidence. In particular, an acceptance of her stringent demand that the output of severe tests stand at the basis of rational inference helps to explain the methodological reactions expressed by scientists in response to the loopholes that plagued the early Bell experiments performed by Alain Aspect et al. At the same time, we argue as a counterpoint that these very reactions present a challenge for 'top-down' approaches to Duhem's problem. / Master of Arts
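For reference, a minimal sketch of the inequality at issue, in its standard CHSH form as tested in the Aspect-type experiments the abstract discusses; the notation (settings a, a', b, b' and the correlation function E) is the textbook convention, not drawn from the thesis:

```latex
% Illustrative sketch only: the CHSH form of Bell's inequality.
% a, a' and b, b' are measurement settings on the two wings of the
% experiment; E(.,.) is the measured correlation between outcomes.
S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'),
\qquad |S| \le 2 \ \text{(local hidden-variable bound)},
\qquad |S| \le 2\sqrt{2} \ \text{(quantum bound)}.
```

Local hidden-variable models bound |S| by 2, while quantum mechanics predicts violations up to 2√2; the loopholes mentioned above concern whether an observed violation might still be attributable to experimental error (e.g., detection inefficiency or a failure of locality in the settings) rather than to a genuine failure of local realism.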
2

Toward Error-Statistical Principles of Evidence in Statistical Inference

Jinn, Nicole Mee-Hyaang 02 June 2014 (has links)
The context for this research is statistical inference, the process of making predictions or inferences about a population from observation and analysis of a sample. In this context, many researchers want to grasp what inferences can be made that are valid, in the sense of being able to be upheld or justified by argument or evidence. Another pressing question among users of statistical methods is: how can spurious relationships be distinguished from genuine ones? Underlying both of these issues is the concept of evidence. In response to these (and similar) questions, the two questions I work on in this essay are: (1) what is a genuine principle of evidence? and (2) do error probabilities have more than a long-run role? Concisely, I propose that felicitous genuine principles of evidence should provide concrete guidelines on precisely how to examine error probabilities, with respect to a test's aptitude for unmasking pertinent errors, which leads to establishing sound interpretations of results from statistical techniques. The starting point for my definition of genuine principles of evidence is Allan Birnbaum's confidence concept, an attempt to control misleading interpretations. However, Birnbaum's confidence concept is inadequate for interpreting statistical evidence, because using only pre-data error probabilities would not pick up on a test's ability to detect a discrepancy of interest (e.g., even if the discrepancy exists) with respect to the actual outcome. Instead, I argue that Deborah Mayo's severity assessment is the most suitable characterization of evidence based on my definition of genuine principles of evidence. / Master of Arts
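As a rough illustration of what a post-data severity assessment looks like in practice, here is a minimal sketch for the textbook one-sided Normal test; the test setup, the helper name `severity`, and all numbers are illustrative assumptions, not taken from the thesis:

```python
# Minimal sketch (illustrative, not from the thesis) of Mayo's post-data
# severity assessment for the one-sided Normal test T+:
#   H0: mu <= 0  vs  H1: mu > 0, with known sigma and sample size n.
from scipy.stats import norm


def severity(mu1, xbar_obs, sigma=1.0, n=100):
    """SEV(mu > mu1) = P(Xbar <= xbar_obs; mu = mu1): the probability of a
    result no more discordant with H0 than the one observed, were the true
    mean only mu1. High severity means the observed result would rarely
    occur if the inferred discrepancy were absent."""
    se = sigma / n ** 0.5
    return norm.cdf((xbar_obs - mu1) / se)


if __name__ == "__main__":
    xbar_obs = 0.2  # assumed observed mean, two standard errors above 0
    for mu1 in (0.0, 0.1, 0.2, 0.3):
        print(f"SEV(mu > {mu1:.1f}) = {severity(mu1, xbar_obs):.3f}")
```

The point of the computation is the one the abstract stresses: the warrant for inferring a discrepancy mu > mu1 is read off the probability that the test would have produced a less discordant result were that inference false, given the actual outcome, rather than from pre-data error rates alone.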
3

Naturalism & Objectivity: Methods and Meta-methods

Miller, Jean Anne 19 August 2011 (has links)
The error statistical account provides a basic account of evidence and inference. Formally, the approach is a re-interpretation of standard frequentist (Fisherian, Neyman-Pearson) statistics. Informally, it gives an account of inductive inference based on arguing from error, an analog of frequentist statistics, which keeps the concept of error probabilities central to the evaluation of inferences and evidence. Error statistical work at present tends to remain distinct from other approaches of naturalism and social epistemology in philosophy of science and, more generally, Science and Technology Studies (STS). My goal is to employ the error statistical program in order to address a number of problems facing approaches in philosophy of science, which fall under two broad headings: (1) naturalistic philosophy of science and (2) social epistemology. The naturalistic approaches that I am interested in looking at seek to provide us with an account of scientific and meta-scientific methodologies that will avoid extreme skepticism, relativism and subjectivity, and claim to teach us something about scientific inferences and evidence produced by experiments (broadly construed). I argue that these accounts fail to identify a satisfactory program for achieving those goals; moreover, to the extent that they succeed, it is by latching on to the more general principles and arguments from error statistics. In sum, I will apply the basic ideas from error statistics and use them to examine (and improve upon) an area to which they have not yet been applied, namely assessing and pushing forward these interdisciplinary pursuits involving naturalistic philosophies of science that appeal to cognitive science, psychology, the scientific record and a variety of social epistemologies. / Ph. D.
