11. "The Machine Made Me Do It!": An Exploration of Ascribing Agency and Responsibility to Decision Support Systems
Haviland, Hannah, January 2005
Are agency and responsibility ascribable solely to humans? The advent of artificial intelligence (AI), including the development of so-called "affective computing," appears to be chipping away at the traditional building blocks of moral agency and responsibility. Spurred by the realization that fully autonomous, self-aware, even rational and emotionally intelligent computer systems may emerge in the future, professionals in engineering and computer science have historically been the most vocal in warning of the ways in which such systems may alter our understanding of computer ethics. Despite the increasing attention of many philosophers and ethicists to the development of AI, a fair amount of conceptual muddiness persists about the conditions for assigning agency and responsibility to such systems, from both an ethical and a legal perspective. Moral and legal philosophies may overlap to a high degree, but they are neither interchangeable nor identical. This paper attempts to clarify the actual and hypothetical ethical and legal situations governing a very particular type of advanced, or "intelligent," computer system: medical decision support systems (MDSS) that feature AI in their system design. While it is well recognized that MDSS can be categorized by type and function, further categorization of their mediating effects on users and patients is needed before even beginning to ascribe some level of moral or legal responsibility. I conclude that various doctrines of Anglo legal systems appear to allow for the possibility of assigning specific types of agency – and thus specific types of legal responsibility – to some types of MDSS. Strong arguments for assigning moral agency and responsibility are still lacking, however.

12. The Obvious & The Essential: Interpreting Software Development & Organizational Change
Öhman Persson, Jenny, January 2004
Examining how our basic values affect development processes is the overall theme of this thesis. In practice, the question is investigated in relation to software development and organizational change; in research, it is investigated in relation to science and its relationship to common sense, specifically within the area of Human-Computer Interaction. The thesis discusses how it might be possible to discover what is essential for development processes, and why the essential may be interpreted as something other than the simply obvious. It examines ways of studying and understanding our social environment and development processes, particularly those concerning people, organizations and software. The empirical examples deal with a software development project and a project that scrutinized the strategy for a governmental authority’s business and information technology. Attitudes are discussed in terms of how they view the user, the customer, the software developers, the software, organizational and implementation processes, organizational management, aesthetic values, functionality and use, research, methods, paradigmatic approaches, ethical issues, psychological reactions, sociological prerequisites, categorizations of people and stress-related health consequences. One particular prerequisite for developing superior computer-supported office work has repeatedly presented itself: an open, questioning attitude towards the software development process, towards organizational change and towards the people working in the organizations. A similar attitude towards research and its design can be crucial to the development of new knowledge. This circumstance can be interpreted as an indication of how important it is that we be aware of and question our preconceived notions, in order to develop autonomous behavior in which we take responsibility for our actions. By doing so, we can avoid misinterpretations and avoid getting trapped into making categorizations that are simply obvious. This is essential and must be emphasized in our search for the path to »healthy work«.

13. Etický rozměr content marketingu na sociálních sítích / The Ethical Dimension of Content Marketing on Social Networks
Švamberk, Viktor, January 2020
This diploma thesis describes and analyzes the ethical dimension of content marketing practice on social networks. The rapid pace of technological progress, which shaped the birth of the participatory Web 2.0 and social networks, has led to an ever-widening gap between everyday online experience and the discourse of applied ethics. The thesis meets three main goals. First, current academic knowledge (and its limits) in the field of marketing and information ethics is described. Second, with the help of Luciano Floridi's Information Ethics, a unified ethical framework for assessing specific moral dilemmas on social networks is constructed. Third, this framework is verified in an extensive Reddit case study, in which each of the six pillars of the constructed ethical framework (which affect fundamental parameters and manifestations indicating unethical activity) is tested on manifestations of unacknowledged marketing activity. At the same time, the existence and emergence of a so-called ethical vacuum are illustrated, indicating the need to update the theoretical approaches of applied ethics for the specifics of social networks and the infosphere in general.

14. On the Ethical Implications of Personal Health Monitoring
Mittelstadt, Brent, January 2013
Recent years have seen an influx of medical technologies capable of remotely monitoring the health and behaviours of individuals in order to detect, manage and prevent health problems. Known collectively as personal health monitoring (PHM), these systems are intended to supplement medical care with health monitoring outside traditional care environments such as hospitals, and they range in complexity from mobile devices to networks of sensors measuring physiological parameters and behaviours. This research project assesses the potential ethical implications of PHM as an emerging medical technology, amenable to anticipatory action intended to prevent or mitigate problematic ethical issues in the future. PHM fundamentally changes how medical care can be delivered: patients can be monitored and consulted at a distance, which eliminates opportunities for face-to-face interaction and potentially undermines the importance of the social, emotional and psychological aspects of medical care. The norms evident in this movement may clash with existing standards of 'good' medical practice from the perspective of patients, clinicians and institutions. By relating utilitarianism, virtue ethics and theories of surveillance to Habermas' concept of colonisation of the lifeworld, a conceptual framework is created which can explain how PHM may be allowed to change medicine as a practice in an ethically problematic way. The framework relates the inhibition of virtuous behaviour among practitioners of medicine, understood as a moral practice, to the movement in medicine towards remote monitoring. To assess the explanatory power of the conceptual framework and expand its borders, an empirical study based on qualitative interviews with potential users of PHM in England is carried out. Recognising that the inherent uncertainty of the future undermines the validity of empirical research, a novel epistemological framework based on Habermas' discourse ethics is created to justify the empirical study. By developing Habermas' concept of translation into a procedure for assessing the credibility of uncertain normative claims about the future, a novel methodology for the empirical ethical assessment of emerging technologies is created and tested. Various methods of analysis are employed, including a review of academic discourses and empirical and theoretical analyses of the moral potential of PHM. Recommendations are made concerning ethical issues in the deployment and design of PHM systems, the analysis and application of PHM data, and the shortcomings of existing research and protection mechanisms in responding to the potential ethical implications of the technology.