331

The narrative manipulation of human subjectivity : a machinic exploration of psyche as artificial ready-made

Desrochers Ayotte, Alexandre 03 1900 (has links)
Avec l’accélération de la production narrative au vingt-et-unième siècle, ainsi que les tentatives d’appropriation des moyens de production et des mythes collectifs par le marché, il y a lieu de questionner l’effet des nouveaux mythes sur la psyché humaine. L’ingestion persistante et soutenue de récits infusés de symboles capitalistes produit une mutation de la subjectivité humaine, dans un mouvement vers une certaine homogénéité. Par une relecture de la Poétique d’Aristote, la première section de cette thèse propose une vision politique de la catharsis, qui théorise le récepteur de toute narration comme programmable et pouvant être guidé vers des attitudes et des postures. Cette conception mène directement à une définition machinique du récit et à la notion d’asservissement machinique, qui conçoit la subjectivité humaine comme engagée dans des processus de connectivité où elle perd certains fragments de son unicité. La troisième foulée de cette thèse théorise la société de contrôle de Deleuze et ses héritiers conceptuels, le capitalisme de surveillance et l’ectosubjectivité. Ces deux notions tentent de percevoir le régime de pouvoir du vingt-et-unième siècle, fondé sur les données personnelles et la standardisation de la psyché humaine. Finalement, le quatrième et dernier chapitre de cette recherche se penche sur la notion de vérité telle que décrite par Michel Foucault dans Le Courage de la Vérité. Dans la notion grecque, et particulièrement son développement platonicien, de parrhēsia, Foucault identifie l’homogénéité d’une vérité basée sur une hiérarchie éthique, et son renversement par les Cyniques en animalité assumée qui ouvre de nouveaux territoires d’existence et de vérité. En somme, ce renversement nous permet de concevoir ce que serait une existence libre, hors d’un régime de vérité qui désubjective et rend homogène. / With the acceleration of narrative production in the twenty-first century, as well as the attempted appropriation of the means of production and collective myths by the market economy, there is an increasing need to question the effect of these new myths on the human psyche. The persistent and sustained ingestion of narratives infused with capitalist symbols produces a transformation of subjectivity, which mutates from unicity to increased standardization. Through a rereading of Aristotle’s Poetics, the first section of this thesis offers a political conception of catharsis that theorizes the receiver of narratives as programmable and guidable towards attitudes and postures. This conception leads directly to a machinic definition of the narrative and the concept of machinic enslavement. These concepts conceive of human subjectivity as engaged in processes of networking where it loses fragments of its unicity. The third chapter of this thesis theorizes Deleuze's society of control and its conceptual successors, surveillance capitalism and ectosubjectivity. Both these concepts attempt to theorize the reigning regime of power of the twenty-first century, based on personal data and the standardization of the human psyche. Finally, the fourth and final chapter of this research analyzes the notion of truth as described by Michel Foucault in The Courage of Truth. In the Greek notion of parrhēsia, and especially in its Platonic development, Foucault identifies the homogeneity of a truth system based on a hierarchization of ethics. The reversal of this system by the Cynics into an assumed bestiality is crucial to this thesis as it opens new territories of existence and truth. In sum, the Cynic reversal permits us to conceive of a free existence, outside of a regime of truth that desubjectivates and homogenizes.
332

Game theoretical characterization of the multi-agent network expansion game

Caye, Flore 04 1900 (has links)
Dans les chaînes d’approvisionnement, les producteurs font souvent appel à des entreprises de transport pour livrer leurs marchandises. Cela peut entraîner une concurrence entre les transporteurs qui cherchent à maximiser leurs revenus individuels en desservant un producteur. Dans ce travail, nous considérons de telles situations où aucun transporteur ne peut garantir la livraison de la source à la destination en raison de son activité dans une région restreinte (par exemple, une province) ou de la flotte de transport disponible (par exemple, uniquement le transport aérien), pour ne citer que quelques exemples. La concurrence est donc liée à l’expansion de la capacité de transport des transporteurs. Le problème décrit ci-dessus motive l’étude du jeu d’expansion de réseau multi-agent joué sur un réseau appartenant à de multiples transporteurs qui choisissent la capacité de leurs arcs. Simultanément, un client cherche à maximiser le flux qui passe par le réseau en décidant de la politique de partage qui récompense chacun des transporteurs. Le but est de déterminer un équilibre de Nash pour le jeu, en d’autres termes, la stratégie d’extension de capacité et de partage la plus rationnelle pour les transporteurs et le client, respectivement. Nous rappelons la formulation basée sur les arcs proposée dans la littérature, dont la solution est l’équilibre de Nash avec le plus grand flux, et nous identifions ses limites. Ensuite, nous formalisons le concept de chemin profitable croissant et nous montrons son utilisation pour établir les conditions nécessaires et suffisantes pour qu’un vecteur de stratégies soit un équilibre de Nash. Ceci nous conduit à la nouvelle formulation basée sur le chemin. Enfin, nous proposons un renforcement du modèle basé sur les arcs et une formulation hybride arc-chemin. Nos résultats expérimentaux soutiennent la valeur des nouvelles inégalités valides obtenues à partir de notre caractérisation des équilibres de Nash avec des chemins croissants rentables. Nous concluons ce travail avec les futures directions de recherche pavées par les contributions de cette thèse. / In supply chains, manufacturers often use transportation companies to deliver their goods. This can lead to competition among carriers seeking to maximize their individual revenues by serving a manufacturer. In this work, we consider such situations where no single carrier can guarantee delivery from source to destination due to its operation in a restricted region (e.g., a province) or the available transportation fleet (e.g., only air transportation), to name a few examples. Therefore, competition is linked to the expansion of transportation capacity by carriers. The problem described above motivates the study of the multi-agent network expansion game played over a network owned by multiple transporters who choose their arcs’ capacity. Simultaneously, a customer seeks to maximize the flow that goes through the network by deciding the sharing policy rewarding each of the transporters. The goal is to determine a Nash equilibrium for the game, in other words, the most rational capacity expansion and sharing policy for the transporters and the customer, respectively. We recap the arc-based formulation proposed in the literature, whose solution is the Nash equilibrium with the largest flow, and we identify its limitations. Then, we formalize the concept of a profitable increasing path and we show its use to establish necessary and sufficient conditions for a vector of strategies to be a Nash equilibrium. This leads us to the first path-based formulation. Finally, we propose a strengthening of the arc-based model and a hybrid arc-path formulation. Our experimental results support the value of the new valid inequalities obtained from our characterization of Nash equilibria with profitable increasing paths. We conclude this work with the future research directions paved by the contributions of this thesis.
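
As a rough illustration of the game described above (and not the formulation proposed in the thesis), the sketch below sets up a toy instance in Python: carriers pick capacity expansions on the arcs they own, the customer's subproblem is a maximum-flow computation over the resulting capacities, and each carrier's payoff is an assumed share of the customer's reward minus its expansion cost. The network, costs, reward rate, sharing rule, and helper names are all hypothetical.

```python
# Toy multi-agent network expansion instance (hypothetical data and payoff
# rule; not the arc- or path-based formulations studied in the thesis).
import networkx as nx

# Arcs: (tail, head, owner, base_capacity, unit_expansion_cost)
ARCS = [
    ("s", "a", "carrier1", 2, 1.0),
    ("a", "t", "carrier2", 1, 2.0),
    ("s", "b", "carrier2", 1, 1.5),
    ("b", "t", "carrier1", 2, 1.0),
]

def flow_and_payoffs(expansion, reward_per_unit_flow, shares):
    """Given each arc's capacity expansion, the customer's reward per unit of
    flow, and the reward share granted to each carrier, return the maximum
    source-to-destination flow and every carrier's payoff."""
    G = nx.DiGraph()
    cost = {}
    for i, (u, v, owner, cap, unit_cost) in enumerate(ARCS):
        G.add_edge(u, v, capacity=cap + expansion.get(i, 0))
        cost[owner] = cost.get(owner, 0.0) + unit_cost * expansion.get(i, 0)
    flow_value, _ = nx.maximum_flow(G, "s", "t")       # customer's subproblem
    reward = reward_per_unit_flow * flow_value
    payoffs = {c: shares.get(c, 0.0) * reward - cost[c] for c in cost}
    return flow_value, payoffs

if __name__ == "__main__":
    expansion = {1: 1}   # carrier2 adds one unit of capacity on arc (a, t)
    flow, payoffs = flow_and_payoffs(expansion, reward_per_unit_flow=3.0,
                                     shares={"carrier1": 0.5, "carrier2": 0.5})
    print(flow, payoffs)
```

Checking whether any single carrier could improve its payoff by deviating from its expansion choice, for a fixed sharing policy, is the informal Nash-equilibrium test that the formulations above make precise.
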
333

Reinforcement Learning for Market Making / Förstärkningsinlärningsbaserad likviditetsgarantering

Carlsson, Simon, Regnell, August January 2022 (has links)
Market making – the process of simultaneously and continuously providing buy and sell prices in a financial asset – is rather complicated to optimize. Applying reinforcement learning (RL) to infer optimal market making strategies is a relatively uncharted and novel research area. Most published articles in the field are notably opaque concerning most aspects, including precise methods, parameters, and results. This thesis attempts to explore and shed some light on the techniques, problem formulations, algorithms, and hyperparameters used to construct RL-derived strategies for market making. First, a simple probabilistic model of a limit order book is used to compare analytical and RL-derived strategies. Second, a market making agent is trained on a more complex Markov chain model of a limit order book using tabular Q-learning and deep reinforcement learning with double deep Q-learning. Results and strategies are analyzed, compared, and discussed. Finally, we propose some exciting extensions and directions for future work in this research field. / Likviditetsgarantering (eng. ”market making”) – processen att simultant och kontinuerligt kvotera köp- och säljpriser i en finansiell tillgång – är förhållandevis komplicerat att optimera. Att använda förstärkningsinlärning (eng. ”reinforcement learning”) för att härleda optimala strategier för likviditetsgarantering är ett relativt outrett och nytt forskningsområde. De flesta publicerade artiklarna inom området är anmärkningsvärt återhållsamma gällande detaljer om de tekniker, problemformuleringar, algoritmer och hyperparametrar som används för att framställa förstärkningsinlärningsbaserade strategier. I detta examensarbete så gör vi ett försök på att utforska och bringa klarhet över dessa punkter. Först används en rudimentär probabilistisk modell av en limitorderbok som underlag för att jämföra analytiska och förstärkningsinlärda strategier. Därefter brukas en mer sofistikerad Markovkedjemodell av en limitorderbok för att jämföra tabulära och djupa inlärningsmetoder. Till sist presenteras även spännande utökningar och direktiv för framtida arbeten inom området.
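
For readers unfamiliar with the tabular method mentioned above, the following minimal sketch shows the standard Q-learning update applied to a deliberately simplified market-making loop. The state (an inventory bucket), action space (bid/ask tick offsets), fill probabilities, and reward are placeholder assumptions, not the limit order book models used in the thesis.

```python
# Minimal tabular Q-learning loop for a toy market-making agent.
# The environment is a stand-in, NOT the thesis's limit order book models.
import random
from collections import defaultdict

ACTIONS = [(b, a) for b in (1, 2) for a in (1, 2)]   # bid/ask tick offsets
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

Q = defaultdict(lambda: [0.0] * len(ACTIONS))        # state -> action values

def step(inventory, action):
    """Hypothetical one-step dynamics: tighter quotes fill more often but
    capture less spread; holding inventory is penalized."""
    bid_off, ask_off = ACTIONS[action]
    buy_fill = random.random() < 0.6 / bid_off
    sell_fill = random.random() < 0.6 / ask_off
    pnl = (bid_off * buy_fill + ask_off * sell_fill) * 0.5   # captured spread
    inventory += buy_fill - sell_fill
    reward = pnl - 0.05 * abs(inventory)                     # inventory penalty
    return max(-5, min(5, inventory)), reward

def train(episodes=1000, horizon=100):
    for _ in range(episodes):
        state = 0                                            # start flat
        for _ in range(horizon):
            if random.random() < EPSILON:                    # epsilon-greedy
                action = random.randrange(len(ACTIONS))
            else:
                action = max(range(len(ACTIONS)), key=lambda a: Q[state][a])
            next_state, reward = step(state, action)
            # Q-learning update: move Q toward the one-step TD target.
            target = reward + GAMMA * max(Q[next_state])
            Q[state][action] += ALPHA * (target - Q[state][action])
            state = next_state

train()
```
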
334

Enshittification av sociala medier : En studie i digitala fotbojor / Enshittification of social media : A study in the shackles of the digital age

Johansson, Carl-Johan, Kovacevic Gahne, Franco January 2024 (has links)
Majoriteten av jordens befolkning har profiler på något av den handfull av sociala nätverk som dominerar Internet. Samtidigt som dessa tjänster växer i dominans och användarantal upplevs det ofta att användarupplevelsen blir allt sämre. Kritik har bland annat riktats mot hur de låter desinformation spridas och hur de monetäriserar användardata för att sälja målstyrd reklam. Enshittification är ett fenomen som definierades av techjournalisten och -kritikern Cory Doctorow för att beskriva hur dessa plattformar aktivt gör upplevelsen sämre för användarna. Enshittification gör gällande att användare stannar på plattformar som exploaterar dem på grund av höga omställningskostnader, såväl ekonomiska som sociala, trots en uppenbar försämring av den digitala miljön. Forskningsarbetet som presenteras här är en teoriutvecklande kritisk studie av enshittification och hur det manifesteras inom sociala medieplattformar. Studiens syfte är att grunda fenomenet och etablera en dialog kring enshittification i en IS-kontext. Den erbjuder insikter i enshittifications underliggande orsaker och dess konsekvenser för användarna, men även i hur man kan motverka fenomenet. Studien argumenterar även för att kritisk teori behövs inom IS för att kunna analysera sådana här fenomen och relaterade sociala aspekter inom informationsteknik. / A majority of the world’s population today resides on social media. At the same time, a small group of platforms dominates the social media landscape. While these services have experienced great growth both in terms of registered users and market dominance, they’ve also been heavily criticized for the way the user experience seems to have deteriorated over time, particularly with respect to how disinformation is spreading throughout the networks and the way these services monetize their users’ personal data. Enshittification is a term coined by the tech journalist and critic Cory Doctorow to describe the way these platforms actively work to make the user experience worse. The phenomenon asserts that people will keep using services that exploit them due to high switching costs—of either personal or economic nature, or both—even though the user experience deteriorates. This study offers a grounding theory of enshittification as a phenomenon, along with a critical perspective on its manifestation in social networks. Its purpose is to create a definition of the phenomenon and to establish a dialogue within the research field of information systems. The study also offers greater insight into the underpinnings of enshittification and its consequences for the end users, along with a critical reflection on possible mitigation strategies. It also argues that critical theory is needed in the field of IS research in order to be able to analyze phenomena like enshittification and similar social aspects that manifest themselves within information technology.
335

Malicious Intent Detection Framework for Social Networks

Fausak, Andrew Raymond 05 1900 (has links)
Many, if not all, people have online social accounts (OSAs) on an online community (OC) such as Facebook (Meta), Twitter (X), Instagram (Meta), Mastodon, and Nostr. OCs enable quick and easy interaction with friends, family, and even entire online communities to share information. There is also a dark side to OCs, where users with malicious intent join OC platforms with the purpose of engaging in criminal activities such as spreading fake news/information, cyberbullying, propaganda, phishing, stealing, and unjust enrichment. These criminal activities are especially concerning when they harm minors. Detection and mitigation are needed to protect and help OCs and stop these criminals from harming others. Many solutions exist; however, they are typically focused on a single category of malicious intent detection rather than an all-encompassing solution. To answer this challenge, we propose the first steps of a framework for analyzing and identifying malicious intent in OCs that we refer to as the malicious intent detection framework (MIDF). MIDF is an extensible proof-of-concept that uses machine learning techniques to enable detection and mitigation. The framework will first be used to detect malicious users using solely relationships and then can be leveraged to create a suite of malicious intent vector detection models, including phishing, propaganda, scams, cyberbullying, racism, spam, and bots for open-source online social networks such as Mastodon and Nostr.
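
As a hypothetical illustration of the relationship-only detection step described above (not MIDF's actual features, data, or models), the sketch below derives a few simple graph features from a toy follow graph and fits an off-the-shelf classifier; the graph, labels, and feature choices are placeholders.

```python
# Hypothetical relationship-only detection: simple graph features per account
# plus a standard classifier. Not the actual MIDF pipeline.
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

# Toy directed follow graph: (follower, followed)
edges = [("spam1", "u1"), ("spam1", "u2"), ("spam1", "u3"), ("spam1", "u4"),
         ("u1", "u2"), ("u2", "u1"), ("u2", "u3"), ("u3", "u2"), ("u4", "u1")]
labels = {"spam1": 1, "u1": 0, "u2": 0, "u3": 0, "u4": 0}   # 1 = malicious

G = nx.DiGraph(edges)

def features(node):
    out_deg = G.out_degree(node)
    in_deg = G.in_degree(node)
    # Reciprocity: fraction of accounts this node follows that follow back.
    followed = set(G.successors(node))
    followers = set(G.predecessors(node))
    recip = len(followed & followers) / out_deg if out_deg else 0.0
    return [out_deg, in_deg, recip]

nodes = list(G.nodes)
X = [features(n) for n in nodes]
y = [labels[n] for n in nodes]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(dict(zip(nodes, clf.predict(X))))
```
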
336

Classifiers for Discrimination of Significant Protein Residues and Protein-Protein Interaction Using Concepts of Information Theory and Machine Learning / Klassifikatoren zur Unterscheidung von Signifikanten Protein Residuen und Protein-Protein Interaktion unter Verwendung von Informationstheorie und maschinellem Lernen

Asper, Roman Yorick 26 October 2011 (has links)
No description available.
337

Towards topology-aware Variational Auto-Encoders : from InvMap-VAE to Witness Simplicial VAE / Mot topologimedvetna Variations Autokodare (VAE) : från InvMap-VAE till Witness Simplicial VAE

Medbouhi, Aniss Aiman January 2022 (has links)
Variational Auto-Encoders (VAEs) are one of the most famous deep generative models. After showing that standard VAEs may not preserve the topology, that is the shape of the data, between the input and the latent space, we tried to modify them so that the topology is preserved. This would help in particular for performing interpolations in the latent space. Our main contribution is twofold. First, we propose the InvMap-VAE, a simple way to turn any dimensionality reduction technique, given its embedding, into a generative model within a VAE framework providing an inverse mapping, with all the advantages that this implies. Second, we propose the Witness Simplicial VAE as an extension of the Simplicial Auto-Encoder to the variational setup, using a Witness Complex for computing a simplicial regularization. The Witness Simplicial VAE is independent of any dimensionality reduction technique and seems to better preserve the persistent Betti numbers of a data set than a standard VAE, although it would still need some further improvements. Finally, the first two chapters of this master's thesis can also be used as an introduction to Topological Data Analysis, General Topology and Computational Topology (or Algorithmic Topology), for any machine learning student, engineer or researcher interested in these areas with no background in topology. / Variationsautokodare (VAE) är en av de mest kända djupa generativa modellerna. Efter att ha visat att standard VAE inte nödvändigtvis bevarar topologiska egenskaper, det vill säga formen på datan, mellan inmatningsdatan och det latenta rummet, försökte vi modifiera den så att topologin är bevarad. Det här skulle i synnerhet underlätta när man genomför interpolering i det latenta rummet. Denna avhandling består av två centrala bidrag. I första hand så utvecklar vi InvMap-VAE, som är en enkel metod att omvandla vilken metod inom dimensionalitetsreducering, givet dess inbäddning, till en generativ modell inom VAE-ramverket, vilket ger en invers avbildning och dess tillhörande fördelar. För det andra så presenterar vi Witness Simplicial VAE som en förlängning av en Simplicial Auto-Encoder till dess variationella variant genom att använda ett vittneskomplex för att beräkna en simpliciell regularisering. Witness Simplicial VAE är oberoende av dimensionalitetsreduceringsteknik och verkar bättre bevara Betti-nummer av ett dataset än en vanlig VAE, även om det finns utrymme för förbättring. Slutligen så kan de första två kapitlen av detta examensarbete också användas som en introduktion till Topologisk Data Analys, Allmän Topologi och Beräkningstopologi (eller Algoritmisk Topologi) till vilken maskininlärningsstudent, ingenjör eller forskare som är intresserad av dessa ämnesområden men saknar bakgrund i topologi.
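
One plausible reading of the InvMap idea sketched in the abstract (not the thesis's implementation) is a VAE whose latent means are additionally pulled toward a precomputed low-dimensional embedding, so that the decoder learns an approximate inverse of the embedding map. The architecture, loss weights, and placeholder data below are illustrative assumptions.

```python
# Sketch of one reading of the InvMap idea: a small VAE whose latent codes are
# regularized toward a precomputed 2-D embedding (random placeholder here,
# standing in for e.g. an Isomap/UMAP embedding). Hyperparameters and the
# architecture are illustrative assumptions, not the thesis's implementation.
import torch
import torch.nn as nn

class InvMapVAE(nn.Module):
    def __init__(self, data_dim=10, latent_dim=2, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, data_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparam.
        return self.dec(z), mu, logvar

def loss(model, x, embedding, beta=1.0, gamma=10.0):
    recon, mu, logvar = model(x)
    rec = ((recon - x) ** 2).sum(dim=1).mean()                     # reconstruction
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
    # Extra term: pull latent means toward the precomputed embedding so the
    # decoder becomes an approximate inverse of the embedding map.
    invmap = ((mu - embedding) ** 2).sum(dim=1).mean()
    return rec + beta * kl + gamma * invmap

# Placeholder data and embedding (stand-ins for a real data set and a real
# dimensionality reduction output).
x = torch.randn(256, 10)
embedding = torch.randn(256, 2)
model = InvMapVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    l = loss(model, x, embedding)
    l.backward()
    opt.step()
```
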
338

From the Boardroom to the Bedroom: Sexual Ecologies in the Algorithmic Age

Bowen, Bernadette 13 May 2022 (has links)
No description available.
