  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Scientific models : a cognitive approach with an application in astrophysics

Bailer-Jones, Daniela M. January 1997 (has links)
No description available.
2

Of clues and causes : a methodological interpretation of origin of life studies

Meyer, Stephen Charles January 1991 (has links)
No description available.
3

The American Grotesque: Free-Thought Idealism in Edward Bliss Foote's "Science in Story"

Tirak, Lita M. 01 January 2010 (has links)
No description available.
4

How to teach an old dog new tricks: quantum information, quantum computing, and the philosophy of physics

Duwell, Armond James 31 January 2005 (has links)
My dissertation consists of two independent parts. Part one of my dissertation examines concepts of quantum information. I clarify three very different concepts of information and assess their implications for understanding quantum mechanics. First I clarify the concept of information due to Shannon, and its relation to physical theories. Using the Shannon concept, I examine two purportedly new concepts of quantum information. I argue that a fundamental philosophical mistake is made regarding these concepts. Advocates of these new concepts do not properly distinguish the properties of information that are due to the physical medium in which it is stored from the properties of information per se. This distinction is crucial for developing a new concept to help us understand quantum mechanics and evaluating its merits. Part two of my dissertation examines explanations of the efficiency that quantum computers enjoy over classical computers for some computational tasks, and the relationship between explanations of efficiency and interpretations of quantum mechanics. I examine the so-called quantum parallelism thesis, that quantum computers can perform many computations in a single step, a feat thought not to be possible on classical computers. The truth of this thesis is not obvious and is contested by some. I develop a set of general criteria for computation that any computing device must satisfy. I use these criteria to demonstrate that the quantum parallelism thesis is true. As an application of these general criteria for computation I articulate three distinct concepts of parallelism and demonstrate that classical computers can compute in parallel as well. This demonstrates that the truth of the quantum parallelism thesis alone does not provide a complete explanation of the efficiency of quantum computers. I supplement the quantum parallelism thesis to provide a complete explanation.
Finally, I address the claim that only the many-worlds interpretation of quantum mechanics can underwrite the truth of the quantum parallelism thesis. The general criteria for computation provide support for the quantum parallelism thesis independent of any interpretation of quantum mechanics.
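The quantum parallelism thesis described in this abstract can be illustrated with a toy state-vector simulation (not drawn from the dissertation itself): a single application of the standard oracle unitary U_f evaluates a Boolean function f on a superposition of all inputs at once. The function f and the two-qubit input size below are arbitrary choices for illustration.

```python
import numpy as np

n = 2                                   # number of input qubits (illustrative choice)
f = lambda x: x % 2                     # example Boolean function on n-bit inputs

dim = 2 ** (n + 1)                      # n input qubits + 1 output qubit
state = np.zeros(dim)
state[0] = 1.0                          # start in |00>|0>

# Hadamard on each input qubit: uniform superposition over all 2^n inputs
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
state = np.kron(np.kron(H, H), I) @ state   # H (x) H (x) I, output qubit untouched

# Oracle U_f: |x>|y> -> |x>|y XOR f(x)>, applied exactly once
Uf = np.zeros((dim, dim))
for x in range(2 ** n):
    for y in range(2):
        Uf[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1.0
state = Uf @ state

# After one oracle call, the amplitudes encode f(x) for every input x
for x in range(2 ** n):
    amp = state[(x << 1) | f(x)]
    print(f"x={x:02b}: amplitude on |x>|f(x)> = {amp:.3f}")
```

The single matrix multiplication by `Uf` correlates every input basis state with its function value, which is the sense of "many computations in one step" at issue; whether this counts as genuine parallel computation is exactly what the abstract's general criteria for computation are meant to adjudicate.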
5

Empiricism and the Epistemic Status of Imaging Technologies

Delehanty, Megan Catherine 04 October 2005 (has links)
The starting point for this project was the question of how to understand the epistemic status of mathematized imaging technologies such as positron emission tomography (PET) and confocal microscopy. These sorts of instruments play an increasingly important role in virtually all areas of biology and medicine. Some of these technologies have been widely celebrated as having revolutionized various fields of study while others have been the target of substantial criticism. Thus, it is essential that we be able to assess these sorts of technologies as methods of producing evidence. They differ from one another in many respects, but one feature they all have in common is the use of multiple layers of statistical and mathematical processing that are essential to data production. This feature alone means that they do not fit neatly into any standard empiricist account of evidence. Yet this failure to be accommodated by philosophical accounts of good evidence does not indicate a general inadequacy on their part since, by many measures, they very often produce very high quality evidence. In order to understand how they can do so, we must look more closely at old philosophical questions concerning the role of experience and observation in acquiring knowledge about the external world. Doing so leads us to a new, grounded version of empiricism. After distinguishing between a weaker and a stronger, anthropomorphic version of empiricism, I argue that most contemporary accounts of observation are what I call benchmark strategies that, implicitly or explicitly, rely on the stronger version according to which human sense experience holds a place of unique privilege. They attempt to extend the bounds of observation and the epistemic privilege accorded to it by establishing some type of relevant similarity to the benchmark of human perception. These accounts fail because they are unable to establish an epistemically motivated account of what relevant similarity consists of.
The last best chance for any benchmark approach, and, indeed, for anthropomorphic empiricism, is to supplement a benchmark strategy with a grounding strategy. Toward this end, I examine the Grounded Benchmark Criterion, which defines relevant similarity to human perception in terms of the reliability-making features of human perception. This account, too, must fail due to our inability to specify these features given the current state of understanding of the human visual system. However, this failure reveals that it is reliability alone that is epistemically relevant, not any other sort of similarity to human perception. Current accounts of reliability suffer from a number of difficulties, so I develop a novel account of reliability that is based on the concept of granularity. My account of reliability in terms of a granularity match both provides the means to refine the weaker version of empiricism and allows us to establish when and why imaging technologies are reliable. Finally, I use this account of granularity in examining the importance of the fact that the output of imaging technologies usually is images.
6

Explicating Emotions

Scarantino, Andrea 10 October 2005 (has links)
In the course of their long intellectual history, emotions have been identified with items as diverse as perceptions of bodily changes (feeling tradition), judgments (cognitivist tradition), behavioral predispositions (behaviorist tradition), biologically based solutions to fundamental life tasks (evolutionary tradition), and culturally specific social artifacts (social constructionist tradition). The first objective of my work is to put some order in the mare magnum of theories of emotions. I taxonomize them into families and explore the historical origin and current credentials of the arguments and intuitions supporting them. I then evaluate the methodology of past and present emotion theory, defending a bleak conclusion: a great many emotion theorists ask "What is an emotion?" without a clear understanding of what counts as getting the answer right. I argue that there are two ways of getting the answer right. One is to capture the conditions of application of the folk term "emotion" in ordinary language (Folk Emotion Project), and the other is to formulate a fruitful explication of it (Explicating Emotion Project). Once we get clear on the desiderata of these two projects, we realize that several long-running debates in emotion theory are motivated by methodological confusions. The constructive part of my work is devoted to formulating a new explication of emotion suitable for the theoretical purposes of scientific psychology. At the heart of the Urgency Management System (UMS) theory of emotions I propose is the idea that an umotion is a special type of superordinate system which instantiates and manages an urgent action tendency by coordinating the operation of a cluster of cognitive, perceptual and motoric subsystems. Crucially, such a superordinate system has a proper function by virtue of which it acquires a special kind of intentionality I call pragmatic.
I argue that umotion is sufficiently similar in use to emotion to count as explicating it, it has precise rules of application, and it accommodates a number of central and widely shared intuitions about the emotions. My hope is that future emotion research will demonstrate the heuristic fruitfulness of the umotion concept for the sciences of mind.
7

Subjective Measures of Well-Being: A philosophical examination

Angner, Erik 30 September 2005 (has links)
Over the last couple of decades, as part of the rise of positive psychology, psychologists have given increasing attention to so-called subjective measures of well-being. These measures, which are supposed to represent the well-being of individuals and groups, are often presented as alternatives to more traditional economic ones for purposes of the articulation, implementation and evaluation of public policy. Unlike economic measures, which are typically based on data about income, market transactions and the like, subjective measures are based on answers to questions like: "Taking things all together, how would you say things are these days? Would you say you're very happy, pretty happy, or not too happy?" The aim of this dissertation is to explore issues in the philosophical foundations of subjective measures of well-being, with special emphasis on the manner in which the philosophical foundations of subjective measures differ from those of traditional economic measures. Moreover, the goal is to examine some arguments for and against these measures, and, in particular, arguments that purport to demonstrate the superiority of economic measures for purposes of public policy. My main thesis is that subjective measures of well-being cannot be shown to be inferior to economic measures quite as easily as some have suggested, but that they nevertheless are associated with serious problems, and that questions about the relative advantage of subjective and economic measures for purposes of public policy will depend on some fundamentally philosophical judgments, e.g. about the nature of well-being and the legitimate goals of public policy.
8

Modeling Evolution

Earnshaw-Whyte, Eugene 04 March 2013 (has links)
Evolution by natural selection began as a biological concept, but since Darwin it has been recognized to have broader application than biology. Applying evolutionary ideas beyond biology requires that the principles of evolution by natural selection be abstracted and generalized from the biological case. The received view of evolution by natural selection in biology is itself seriously flawed, which understandably renders the project of abstracting it and applying it elsewhere challenging. This thesis develops a generalized account of models of evolution by natural selection which is used to resolve various outstanding issues in the philosophy of biology. This also clarifies the methods and prospects of applying evolution by natural selection to non-biological domains. It does so by analyzing models of evolution both within biology and outside it, relying in particular on the contrast provided by models of firm competition in evolutionary economics. This analysis highlights those aspects of the classical view which must be abandoned or revised, and leads to the development of a neo-dynamical model of evolution, which is developed, explained, defended, and applied to problems in evolutionary biology and multi-level selection theory.
9

Computations and Computers in the Sciences of Mind and Brain

Piccinini, Gualtiero 18 November 2003 (has links)
Computationalism says that brains are computing mechanisms, that is, mechanisms that perform computations. At present, there is no consensus on how to formulate computationalism precisely or adjudicate the dispute between computationalism and its foes, or between different versions of computationalism. An important reason for the current impasse is the lack of a satisfactory philosophical account of computing mechanisms. The main goal of this dissertation is to offer such an account. I also believe that the history of computationalism sheds light on the current debate. By tracing different versions of computationalism to their common historical origin, we can see how the current divisions originated and understand their motivation. Reconstructing debates over computationalism in the context of their own intellectual history can contribute to philosophical progress on the relation between brains and computing mechanisms and help determine how brains and computing mechanisms are alike, and how they differ. Accordingly, my dissertation is divided into a historical part, which traces the early history of computationalism up to 1946, and a philosophical part, which offers an account of computing mechanisms. The two main ideas developed in this dissertation are that (1) computational states are to be identified functionally not semantically, and (2) computing mechanisms are to be studied by functional analysis. The resulting account of computing mechanisms, which I call the functional account of computing mechanisms, can be used to identify computing mechanisms and the functions they compute. I use the functional account of computing mechanisms to taxonomize computing mechanisms based on their different computing power, and I use this taxonomy of computing mechanisms to taxonomize different versions of computationalism based on the functional properties that they ascribe to brains.
By doing so, I begin to tease out empirically testable statements about the functional organization of the brain that different versions of computationalism are committed to. I submit that when computationalism is reformulated in the more explicit and precise way I propose, the disputes about computationalism can be adjudicated on the grounds of empirical evidence from neuroscience.
10

Representations of Space in Seventeenth Century Physics

Miller, David Marshall 02 June 2006 (has links)
The changing understanding of the universe that characterized the birth of modern science included a fundamental shift in the prevailing representation of space: the presupposed conceptual structure that allows one to intelligibly describe the spatial properties of physical phenomena. At the beginning of the seventeenth century, the prevailing representation of space was spherical. Natural philosophers first assumed a spatial center, then specified meanings with reference to that center. Directions, for example, were described in relation to the center, and locations were specified by distance from the center. Through a series of attempts to solve problems first raised by the work of Copernicus, this Aristotelian, spherical framework was replaced by a rectilinear representation of space. By the end of the seventeenth century, descriptions were understood by reference to linear orientations, as parallel or oblique to a presupposed line, and locations were identified without reference to a privileged central point. This move to rectilinear representations of space enabled Gilbert, Kepler, Galileo, Descartes, and Newton to describe and explain the behavior of the physical world in the novel ways for which these men are justly famous, including their theories of gravitational attraction and inertia. In other words, the shift towards a rectilinear representation of space was essential to the fundamental reconception of the universe that gave rise to both modern physical theory and, at the same time, the linear way of experiencing the world that characterizes modern science.
