1

[en] TECHNOMICS AND DEMOGRAMMAR: LAW AND TECHNICS IN THE NOMOS OF PLATFORMS / [pt] TECNOMIA E DEMOGRAMÁTICA: DIREITO E TÉCNICA NO NOMOS DAS PLATAFORMAS

JOSE ANTONIO REGO MAGALHAES, 14 June 2021
[en] This thesis intends an opening move in the field of legal theory as well as in that of law and technology studies, and especially at the intersection of the two. It seeks to construct a theory of law and of its relation to technics that allows for the navigation of the contemporary passage from what I call modern technomics to a platform technomics linked to the emergence of planetary-scale computation, algorithmic governance and the climate crisis. To do that, I first try to contribute to a speculative legal theory (neither an internal theory of modern law as we know it, nor a critical/deconstructive legal theory). I do so by reading two key thinkers of modern law, Hans Kelsen and Carl Schmitt, as complementary, and through currents of the so-called speculative turn in contemporary theory. Kelsen is read as an accelerationist/inhumanist, and Schmitt in light of cosmo/geontopolitics, while Deleuze and Guattari's assemblage theory serves as the background through which the rest is connected. Secondly, I mobilize this conceptual apparatus to speculate about platform technomics. I build technomic concepts of code, platform, device, application, interface and user. Algorithmic demogrammar is defined as operating by massive data collection, tracing of graphs and modulation of conducts. I trace some tendencies in the transition to platform technomics, e.g. toward the contingency of the positions of person and thing, the indistinction between norm and bias, the non-instrumentality of technics, the plurality of worlds, the superimposition of nomoi, and the entanglement of cognition and governance. I finish by offering three speculative models/paradigms for the navigation of platform technomics: an inhuman model, based on the hypothesis of general intelligence; an animistic paradigm, linked to the Gaia hypothesis/myth; and, finally, a tentative composition between the two.
2

Systèmes d’intelligence artificielle et santé : les enjeux d’une innovation responsable / [en] Artificial intelligence systems and health: the challenges of responsible innovation

Voarino, Nathalie, 09 1900
[en] The use of artificial intelligence (AI) systems in health is part of the advent of a new "high-definition" medicine that aims to be predictive, preventive and personalized, drawing on the unprecedented amount of data available today. At the heart of digital health innovation, the development of AI systems promises an interconnected and self-learning healthcare system. AI systems could thus help redefine the classification of diseases, generate new medical knowledge, or predict the health trajectories of individuals for prevention purposes. Various applications in healthcare are being considered, ranging from support for medical decision-making through expert systems to precision medicine (e.g. pharmacological targeting), as well as individualized prevention through health trajectories built on biological markers. However, urgent ethical concerns emerge with the increasing use of algorithms to analyze a growing amount of health-related data (often personal, if not sensitive) and the reduction of human oversight of many automated processes. The limitations of big data analysis, the need for data sharing and the opacity of algorithmic decisions give rise to ethical concerns about the protection of privacy and intimacy, free and informed consent, social justice, the dehumanization of care and of the patient, and security. To address these challenges, many initiatives have focused on defining and applying guiding principles for an ethical governance of AI. The operationalization of these principles, however, faces difficulties inherent to applied ethics, concerning both the scope (universal or plural) of the principles and the way they are put into practice (inductive or deductive methods). While context-sensitive, bottom-up approaches to applied ethics seem to answer these difficulties, they face challenges of their own. An analysis of citizens' fears and expectations emerging from the discussions held during the co-construction of the Montreal Declaration for a Responsible Development of AI helps outline these challenges. Three main challenges to the exercise of responsibility emerge, each of which could hinder an ethical governance of AI in health: the incapacitation of health professionals and patients, the problem of many hands, and artificial agency. These challenges call for AI systems that empower their users and preserve human agency, in order to foster a (pragmatic) shared responsibility among the various stakeholders involved in developing AI systems for health. Meeting these challenges is essential to adapt existing governance mechanisms and enable responsible digital innovation in healthcare, one that keeps the human being at the center of its development.
