31

Carnap, Tarski, and Quine's Year Together: Logic, Science, and Mathematics

Frost-Arnold, Gregory 28 September 2006
During the academic year 1940-1941, several giants of analytic philosophy congregated at Harvard: Russell, Tarski, Carnap, Quine, Hempel, and Goodman were all in residence. This group held regular public meetings as well as private conversations. Carnap took detailed dictation notes that give us an extensive record of the discussions at Harvard that year. Surprisingly, the most prominent question in these discussions is: if the number of physical items in the universe is finite (or possibly finite), what form should the logic and mathematics in science take? This question is closely connected to an abiding philosophical problem, one of central importance to the logical empiricists: what is the relationship between the logico-mathematical realm and the natural, material realm? This problem continues to be central to analytic philosophy of logic, mathematics, and science. My dissertation focuses on three issues connected with this problem that dominate the Harvard discussions: nominalism, the unity of science, and analyticity. I both reconstruct the lines of argument represented in the Harvard discussions and relate them to contemporary treatments of these issues.
32

Haag's Theorem and the Interpretation of Quantum Field Theories with Interactions

Fraser, Doreen Lynn 28 September 2006
Quantum field theory (QFT) is the physical framework that integrates quantum mechanics and the special theory of relativity; it is the basis of many of our best physical theories. QFTs for interacting systems have yielded extraordinarily accurate predictions. Yet, in spite of unquestionable empirical success, the treatment of interactions in QFT raises serious issues for the foundations and interpretation of the theory. This dissertation takes Haag's theorem as a starting point for investigating these issues. It begins with a detailed exposition and analysis of different versions of Haag's theorem. The theorem is cast as a reductio ad absurdum of canonical QFT prior to renormalization. It is possible to adopt different strategies in response to this reductio: (1) renormalizing the canonical framework; (2) introducing a volume (i.e., long-distance) cutoff into the canonical framework; or (3) abandoning another assumption common to the canonical framework and Haag's theorem, which is the approach adopted by axiomatic and constructive field theorists. Haag's theorem does not entail that it is impossible to formulate a mathematically well-defined Hilbert space model for an interacting system on infinite, continuous space. Furthermore, Haag's theorem does not undermine the predictions of renormalized canonical QFT; canonical QFT with cutoffs and existing mathematically rigorous models for interactions are empirically equivalent to renormalized canonical QFT. The final two chapters explore the consequences of Haag's theorem for the interpretation of QFT with interactions. I argue that no mathematically rigorous model of QFT on infinite, continuous space admits an interpretation in terms of quanta (i.e., quantum particles). Furthermore, I contend that extant mathematically rigorous models for physically unrealistic interactions serve as a better guide to the ontology of QFT than either of the other two formulations of QFT.
Consequently, according to QFT, quanta do not belong in our ontology of fundamental entities.
33

A Theory of Conceptual Advance: Explaining Conceptual Change in Evolutionary, Molecular, and Evolutionary Developmental Biology

Brigandt, Ingo 20 September 2006
The theory of concepts advanced in the dissertation aims at accounting for a) how a concept makes successful practice possible, and b) how a scientific concept can be subject to rational change in the course of history. Traditional accounts in the philosophy of science have usually studied concepts in terms only of their reference; their concern is to establish a stability of reference in order to address the incommensurability problem. My discussion, in contrast, suggests that each scientific concept consists of three components of content: 1) reference, 2) inferential role, and 3) the epistemic goal pursued with the concept's use. I argue that in the course of history a concept can change in any of these three components, and that change in one component, including change of reference, can be accounted for as being rational relative to other components, in particular a concept's epistemic goal. This semantic framework is applied to two cases from the history of biology: the homology concept as used in 19th and 20th century biology, and the gene concept as used in different parts of the 20th century. The homology case study argues that the advent of Darwinian evolutionary theory, despite introducing a new definition of homology, did not bring about a new homology concept (distinct from the pre-Darwinian concept) in the 19th century. Nowadays, however, distinct homology concepts are used in systematics/evolutionary biology, in evolutionary developmental biology, and in molecular biology. The emergence of these different homology concepts is explained as occurring in a rational fashion. The gene case study argues that conceptual progress occurred with the transition from the classical to the molecular gene concept, despite a change in reference. In the last two decades, change occurred internal to the molecular gene concept, so that nowadays this concept's usage and reference varies from context to context.
I argue that this situation emerged rationally and that the current variation in usage and reference is conducive to biological practice. The dissertation uses ideas and methodological tools from the philosophy of mind and language, the philosophy of science, the history of science, and the psychology of concepts.
34

Descartes' Theory of Passions

Franco, Abel Benjamin 20 September 2006
Descartes not only had a theory of passions, but one that deserves a place among contemporary debates on emotions. The structure of this dissertation attempts to make explicit the unity of that theory. The study of the passions by the physicien (who not only studies matter and motion but also human nature) [Chapter 2] appears to be the foundation (as he tells Chanut) of morals [Chapters 1 and 4] insofar as their main function [Chapter 3] is to dispose us to act in ways which directly affect our natural happiness. In other words, Descartes is in the Passions of the Soul (1649) climbing the very tree of philosophy he presented two years earlier in the Preface to the French Edition of the Principles of Philosophy: the trunk (in this case a section of it: our nature) leads us to the highest of the three branches (morals) when we study human passions. Human passions constitute the only function of the mind-body union that can guide us in the pursuit of our (natural) happiness. They do this (1) by informing the soul about the current state of perfection both of the body and, most importantly, of the mind-body union; (2) by discriminating what is relevant in the world regarding our perfection; and (3) by proposing (to the will) possible ways of action (i.e. by disposing us to act). The virtuous (the generous) are those who have achieved contentment not by impeding the arousal of their passions but by living them according to reason, that is, by following freely the dispositions to act (brought about by them) which can increase our perfection, i.e. the disposition to join true goods and to avoid true evils.
Regarding current debates on emotions [Chapter 5], Descartes' perceptual model not only provides a satisfactory answer to the major challenges faced today both by feeling theories (intentionality) and judgment theories (feelings and the passivity of emotions) but it can also help advance those debates by, on one hand, bringing into them new or neglected ideas, and, on the other, providing a solid overall framework to think about passions.
35

Approaching the Planck Scale from a Generally Relativistic Point of View: A Philosophical Appraisal of Loop Quantum Gravity

Wuthrich, Christian 02 October 2006
My dissertation studies the foundations of loop quantum gravity (LQG), a candidate for a quantum theory of gravity based on classical general relativity. At the outset, I discuss two (and, I claim, separate) questions: first, do we need a quantum theory of gravity at all; and second, if we do, does it follow that gravity should or even must be quantized? My evaluation of different arguments either way suggests that while no argument can be considered conclusive, there are strong indications that gravity should be quantized. LQG attempts a canonical quantization of general relativity and thereby provokes a foundational interest as it must take a stance on many technical issues tightly linked to the interpretation of general relativity. Most importantly, it codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism and how it fares through the quantization process. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level. What happens at the quantum level is an entirely open issue. Known examples of classically singular behaviour which gets regularized by quantization evoke an admittedly pious hope that the singularities which notoriously plague the classical theory may be washed away by quantization. This work scrutinizes pronouncements claiming that the initial singularity of classical cosmological models vanishes in quantum cosmology based on LQG and concludes that these claims must be severely qualified. In particular, I explicate why casting the quantum cosmological models in terms of a deterministic temporal evolution fails to capture the concepts at work adequately.
Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.
36

Causation, Counterfactual Dependence and Pluralism

Longworth, Francis 29 September 2006
The principal concern of this dissertation is whether or not a conceptual analysis of our ordinary concept of causation can be provided. In chapters two and three I show that two of the most promising univocal accounts (the counterfactual theories of Hitchcock and Yablo) are subject to numerous counterexamples. In chapter four, I show that Hall's pluralistic theory of causation, according to which there are two concepts of causation, also faces a number of counterexamples. In chapter five, I sketch an alternative, broadly pluralistic theory of token causation, according to which causation is a 'cluster concept' with a 'prototypical' structure. This theory is able to evade the counterexamples that beset other theories and, in addition, offers an explanation of interesting features of the concept such as the existence of borderline cases, and the fact that some instances of causation seem to be 'better' examples of the concept than others.
37

Causation in the Nature-Nurture Debate: The Case of Genotype-Environment Interaction

Tabery, James Gregory 20 September 2007
I attempt to resolve an aspect of the nature-nurture debate. Consider a typical nature-nurture question: Why do some individuals develop a complex trait such as depression, while others do not? This question incorporates an etiological query about the causal mechanisms responsible for the individual development of depression; it also incorporates an etiological query about the causes of variation responsible for individual differences in the occurrence of depression. Scientists in the developmental research tradition of biology investigate the former; scientists in the biometric research tradition of biology investigate the latter. So what is the relationship? The developmental and biometric research traditions, I argue, are united in their joint effort to elucidate what I call difference mechanisms. Difference mechanisms are regular causal mechanisms made up of difference-making variables that take different values in the natural world. On this model, individual differences are the effect of difference-makers in development that take different values in the natural world. I apply this model to the case of genotype-environment interaction (or G×E), showing that there have actually been two separate concepts of G×E: a biometric concept (or G×EB) and a developmental concept (or G×ED). These concepts also may be integrated via the difference mechanisms model: G×E results from the interdependence of difference-makers in development that take different values in the natural world.
38

Equilibrium and Explanation in 18th Century Mechanics

Hepburn, Brian Spence 20 September 2007
The received view of the Scientific Revolution is that it was completed with the publication of Isaac Newton's (1642-1727) Philosophiae Naturalis Principia Mathematica in 1687. Work on mechanics in the century or more following was thought to be merely a matter of working out the mathematical details of Newton's program, in particular of translating his mechanics from its synthetic expression into analytic form. I show that the mechanics of Leonhard Euler (1707-1782) and Joseph-Louis Lagrange (1736-1813) did not begin with Newton's Three Laws. They provided their own beginning principles and interpretations of the relation between mathematical description and nature. Functional relations among the quantified properties of bodies were interpreted as basic mechanical connections between those bodies. Equilibrium played an important role in explaining the behavior of physical systems understood mechanically. Some behavior was revealed to be an equilibrium condition; other behavior was understood as a variation from equilibrium. Implications for scientific explanation are then drawn from these historical considerations, specifically an alternative account of mechanical explanation and unification. Trying to cast mechanical explanations (of the kind considered here) as Kitcher-style argument schema fails to distinguish legitimate from spurious explanations. Consideration of the mechanical analogies lying behind the schema is required.
39

Reliability and Validity of Experiment in the Neurobiology of Learning and Memory

Sullivan, Jacqueline A. 27 September 2007
The concept of reliability has been defined traditionally by philosophers of science as a feature that an experiment has when it can be used to arrive at true descriptive or explanatory claims about phenomena. In contrast, philosophers of science typically take the concept of validity to correspond roughly to that of generalizability, which is defined as a feature that a descriptive or explanatory claim has when it is based on laboratory data but is applicable to phenomena beyond those effects under study in the laboratory. Philosophical accounts of experiment typically treat the reliability of scientific experiment and the validity of descriptive or explanatory claims independently. On my account of experiment, however, these two issues are intimately linked. I show by appeal to case studies from the contemporary neurobiology of learning and memory that measures taken to guarantee the reliability of experiment often result in a decrease in the validity of those scientific claims that are made on the basis of such experiments and, furthermore, that strategies employed to increase validity often decrease reliability. Yet, since reliability and validity are both desirable goals of scientific experiments, and, on my account, competing aims, a tension ensues. I focus on two types of neurobiological experiments as case studies to illustrate this tension: (1) organism-level learning experiments and (2) synaptic-level plasticity experiments. I argue that the express commitment to the reliability of experimental processes in neurobiology has resulted in the invalidity of mechanistic claims about learning and plasticity made on the basis of data obtained from such experiments. The positive component of the dissertation consists in specific proposals that I offer as guidelines for resolving this tension in the context of experimental design.
40

The Unity of Science in Early-Modern Philosophy: Subalternation, Metaphysics and the Geometrical Manner in Scholasticism, Galileo and Descartes

Biener, Zvi 10 June 2008
The project of constructing a complete system of knowledge, a system capable of integrating all that is and could possibly be known, was common to many early-modern philosophers and was championed with particular alacrity by René Descartes. The inspiration for this project often came from mathematics in general and from geometry in particular: Just as propositions were ordered in a geometrical demonstration, the argument went, so should propositions be ordered in an overall system of knowledge. Science, it was thought, had to proceed 'more geometrico'. I offer a new interpretation of 'science more geometrico' based on an analysis of the explanatory forms used in certain branches of geometry. These branches were optics, astronomy, and mechanics; the so-called subalternate, subordinate, or mixed-mathematical sciences. In Part I, I investigate the nature of the mixed-mathematical sciences according to Aristotle and some 'liberal Jesuit' scholastic-Aristotelians. In Part II, the heart of the work, I analyze the metaphysics and physics of Descartes' "Principles of Philosophy" (1644, 1647) in light of the findings of Part I and an example from Galileo. I conclude by arguing that we must broaden our understanding of the early-modern conception of 'science more geometrico' to include concepts taken from the mixed-mathematical sciences. These render the geometrical manner more flexible than previously thought.
