1. Equipoise and Skepticism: Past, Present and Future / Witt, John R. (22 August 2008)
Thesis (M.A.), Department of Philosophy, Indiana University-Purdue University Indianapolis (IUPUI), 2008. Advisors: Eric Mark Meslin, John J. Tilley, Timothy D. Lyons. Includes vita and bibliographical references (leaves 48-52).
Currently, the predominant view in research ethics maintains that physicians can morally justify offering randomized clinical trial enrollment to their patients only if some form of equipoise is present. That is, the physician must experience (either individually or communally) a state of reasoned uncertainty about the relative merits of two or more competing treatments for a given disease before she may recommend that her patient participate in a clinical trial. Increasingly, however, this position has come under critical scrutiny. My argument engages this trend by turning to the history of philosophy: I claim that the term “equipoise,” as used in the medical research context, closely parallels terms and concepts from the philosophical tradition of skepticism, and that this parallel explains the principle of equipoise’s vulnerability to already published criticisms. Comparing the criticisms of equipoise in the medical research literature with criticisms of philosophical skepticism reveals a potentially grim future for equipoise as a legitimate guiding principle for the ethical conduct of clinical research.

2. Ethos and Regula in Contemporary Clinical Research (January 2012)
Abstract: With new trends in drug development and testing, it must be determined whether the current balance of ethos (the moral norm) and regula (the legal framework) can successfully protect patients while keeping the door to scientific innovation open. The rise of the Clinician Investigator (CI) in both academic and private research challenges the protection of subjects through the conflicting dual role of physician and scientist. Despite the constant evolution of regulation and ethical standards, questions persist about their combined effectiveness in meeting this challenge. In a 2010 Mother Jones article, Carl Elliott describes the suicide of a patient-subject enrolled in an industry-funded, physician-run trial of an anti-psychotic drug. Elliott provides a personal account of lapses in the ethical principles of beneficence, respect for subjects, and justice. Through analysis of the problems presented in the case as a model for potential dangers in clinical research, the effectiveness of ethics and law in protecting human subjects is examined. While the lag between ethical standards and regulation has historically been shown to cause similar issues, misconceptions about current regulation and ethical standards may also be contributing to the decline in subject protections. After IRB approval of the subject protections in a research protocol, CIs have been shown to downgrade their responsibility for maintaining ethos over the course of the trial. And despite their experience with patient-centered ethos as physicians, CIs may be inclined to abandon these values in favor of the ethos of a researcher, with the goal of avoiding therapeutic misconception. Maintaining personal responsibility for subjects beyond the regulatory structure, and promoting subject welfare as part of the ethical standard for research investigators, would provide added security for subjects and reduce the opportunity for exploitation in future research.
Dissertation/Thesis, M.S. Biology, 2012.

3. Reimagining Human-Machine Interactions through Trust-Based Feedback / Kumar Akash (17 June 2020)
Intelligent machines, and more broadly intelligent systems, are becoming increasingly common in the everyday lives of humans. Nonetheless, despite significant advances in automation, human supervision and intervention remain essential in almost all sectors, from manufacturing and transportation to disaster management and healthcare. These intelligent machines interact and collaborate with humans in ways that demand a greater level of trust between human and machine. While a lack of trust can lead a human to disuse automation, over-trust can lead a human to rely on a faulty autonomous system, with potentially harmful consequences. Human trust should therefore be calibrated to optimize these human-machine interactions. This calibration can be achieved by designing human-aware automation that infers human behavior and responds accordingly in real time.

In this dissertation, I present a probabilistic framework to model and calibrate a human's trust and workload dynamics during interaction with an intelligent decision-aid system. More specifically, I develop multiple quantitative models of human trust, ranging from a classical state-space model to a classification model based on machine learning techniques; both are parameterized using data collected through human-subject experiments. I then present a probabilistic dynamic model that captures the dynamics of human trust together with human workload, and use it to synthesize optimal control policies that vary automation transparency based on estimates of the human's state, with the aim of improving context-specific performance objectives. I also analyze the coupled interactions between human trust and workload to strengthen the model framework. Finally, I validate the optimal control policies in closed-loop human-subject experiments. The proposed framework provides a foundation for the widespread design and implementation of real-time adaptive automation driven by human states in human-machine interactions.
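The dissertation's actual models are not reproduced in this listing. As a minimal sketch of what a classical state-space trust model of this general kind can look like (the update rule, parameter values, and names below are assumptions invented for illustration), consider a discrete-time dynamic in which trust is pulled toward the machine's recent performance:

```python
import numpy as np

# Hypothetical parameters for the sketch (not taken from the dissertation).
A = 0.85  # trust inertia: how strongly current trust carries over each step
B = 0.15  # weight of the machine's most recent performance
# With A + B = 1, trust converges to 1.0 under sustained machine success
# and decays toward 0.0 under sustained machine failure.

def trust_update(trust: float, performance: float) -> float:
    """One step of the linear trust dynamic.

    trust:       current trust estimate in [0, 1]
    performance: 1.0 if the machine's last recommendation was correct,
                 0.0 if it was faulty
    """
    return float(np.clip(A * trust + B * performance, 0.0, 1.0))

# Simulate trust eroding during a run of machine failures and partially
# recovering once the machine performs well again.
trust = 0.5
for step, perf in enumerate([1, 1, 1, 0, 0, 0, 1, 1, 1, 1]):
    trust = trust_update(trust, perf)
    print(f"step {step}: performance={perf}, trust={trust:.3f}")
```

A trust-calibrating controller of the kind the abstract describes could then, for example, raise automation transparency (exposing more of the machine's reasoning) whenever the estimated trust falls below a threshold, accepting the added workload that the extra information imposes.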

4. State-Based Analysis of General Aviation Loss of Control Accidents Using Historical Data and Pilots' Perspectives / Neelakshi Majumdar (22 April 2023)
General Aviation (GA) encompasses all aircraft operations other than scheduled, military, and commercial operations. GA accidents account for approximately 94% of all aviation accidents in the United States each year, and 75% of these accidents involve pilot-related factors (pilot actions or conditions). In-flight loss of control means that the flight crew was unable to maintain control of the aircraft in flight. With almost 50% of loss of control accidents proving fatal each year, it remains the deadliest cause of GA accidents.

The most common approach to understanding accident causation is to analyze historical data from sources such as the National Transportation Safety Board (NTSB) database, which contains abundant, rich information. However, in contrast to the extensive investigations and detailed reports produced for commercial aviation accidents, GA accident investigations tend to be shorter, and the resulting reports tend to be brief and limited, especially regarding the role of human factors in accidents. Relying on historical data alone therefore cannot provide a complete understanding of accident causation.

There is a clear need to better understand the role of human factors in GA accidents in order to prevent such accidents and thus improve aviation safety. In my research, I focus on a specific type of accident, in-flight loss of control (LOC-I), the deadliest cause of GA accidents. I use historical data analysis and human-subjects research with pilots to investigate the role of human factors in loss of control accidents. Building on previous work, I created a state-based modeling framework that maximizes data extraction and insight formation from NTSB accident reports by (1) developing a structured modeling language that represents accident causation as states and triggers; (2) populating the language lexicon of states and triggers using insights from accident reports and pilots' perspectives gathered via surveys and interviews; and (3) applying Natural Language Processing (NLP) and machine learning techniques to automatically translate accident narratives into the language lexicon (a toy version of this step is sketched below). The framework focuses on LOC-I but can be extended to other types of accidents. Findings from my study may help enable more consistent accident analysis, better accident reporting, and improved training methods and operating procedures for GA pilots.
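The thesis's actual NLP pipeline is not reproduced here. As a minimal sketch of the translation step alone, assuming an invented lexicon of state labels and toy training sentences, narrative text can be mapped onto lexicon states with a standard bag-of-words classifier:

```python
# Illustrative sketch only: the lexicon states, sentences, and model choice
# below are assumptions for this example, not the thesis's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: narrative fragments paired with hypothetical lexicon states.
narratives = [
    "the pilot failed to maintain airspeed during the turn to final",
    "the airplane exceeded its critical angle of attack and entered a spin",
    "the pilot continued visual flight into instrument meteorological conditions",
    "the engine lost power due to fuel exhaustion",
]
states = [
    "airspeed_not_maintained",
    "stall_spin",
    "vfr_into_imc",
    "fuel_exhaustion",
]

# TF-IDF turns each narrative into a weighted bag-of-words vector;
# logistic regression then assigns the most likely lexicon state.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(narratives, states)

print(model.predict(["pilot did not maintain adequate airspeed in the pattern"]))
# Likely output, given the shared vocabulary: ['airspeed_not_maintained']
```

In practice, a lexicon built from accident reports and pilot interviews would contain far more states and triggers, and training a reliable translator would require substantially more labeled narratives than this toy example.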