131

The curious case of artificial intelligence : An analysis of the relationship between the EU medical device regulations and algorithmic decision systems used within the medical domain

Björklund, Pernilla January 2021 (has links)
The healthcare sector has become a key area for the development and application of new technology and, not least, Artificial Intelligence (AI). New reports are constantly being published about how this algorithm-based technology supports or performs various medical tasks. These reports illustrate the rapid development of AI taking place within healthcare and how algorithms are increasingly involved in systems and medical devices designed to support medical decision-making. The digital revolution and the advancement of AI technologies represent a step change in the way healthcare may be delivered, medical services coordinated and well-being supported. This could allow for easier and faster communication, earlier and more accurate diagnosis and better healthcare at lower cost. However, systems and devices relying on AI differ significantly from other, traditional, medical devices. AI algorithms are by nature complex and partly unpredictable. Additionally, varying levels of opacity have made it hard, sometimes impossible, to interpret and explain recommendations or decisions made by or with support from algorithmic decision systems. These characteristics of AI technology raise important technological, practical, ethical and regulatory issues. The objective of this thesis is to analyse the relationship between the EU regulation on medical devices (MDR) and algorithmic decision systems (ADS) used within the medical domain. The principal question is whether the MDR is enough to guarantee safe and robust ADS within the European healthcare sector or whether complementary (or completely different) regulation is necessary. In essence, it will be argued that (i) while ADS are heavily reliant on the quality and representativeness of underlying datasets, the MDR contains no requirements regarding the quality or composition of these datasets, (ii) while ADS are believed to bring historically unprecedented changes to healthcare, the regulation lacks guidance on how to manage novel risks and hazards unique to ADS, and (iii) as increasingly autonomous systems continue to challenge existing perceptions of how safety and performance are best maintained, new mechanisms (for transparency, human control and accountability) must be incorporated into the systems. It will also be found that the ability of ADS to change after market certification will eventually necessitate radical changes to the current regulation, and a new regulatory paradigm might be needed.
132

Constraint-Based Patterns : An examination of an algorithmic composition method

Lilja, Robin January 2021 (has links)
This thesis examines the composition of three musical works through the use of my constraint-based patterns. I have explored the patterns through spreadsheets and through SuperCollider, a software environment for algorithmic composition and audio synthesis. The aim is to find how the patterns can be used to reach clear contrasts while maintaining coherence in the music, to identify challenges and possibilities within the patterns, and to explore how evaluation of the artistic results can contribute to improved methods. While I see the main method as autoethnographic, with the core focus on composing, I have also collected data through feedback from other composers and through focus groups. Throughout this thesis I describe my process of constructing patterns and composing music, accompanied by my reasoning and relevant feedback. My results from analyzing the feedback, scores and patterns are that while some ways of using the patterns are well suited for achieving contrast and coherence, problems arose related to, among other things, musical form and predictability. Evaluation through feedback and interviews resulted in a better understanding of the patterns, and different workflows proved differently viable for evaluation. The most valuable insight is that the greater the number of composition parameters controlled through constraint-based patterns, the simpler each individual composition parameter has to be in order to reach contrasting results that I find satisfying. My conclusion is that I can therefore design each individual composition parameter with high coherence to reach contrasting results when the composition parameters are applied to the same musical structure.
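Illustrative sketch only: Lilja's patterns are built in spreadsheets and SuperCollider and are not reproduced in this abstract. The Python snippet below merely shows one way a constraint-based pattern for a single composition parameter (pitch) could work, with an assumed scale and interval constraints; each new value is drawn only from candidates that satisfy every constraint.

```python
import random

# Hypothetical illustration of a constraint-based pattern for one parameter (pitch).
SCALE = [0, 2, 3, 5, 7, 8, 10]          # assumed constraint: stay in a minor scale

def in_scale(prev, candidate):
    return candidate % 12 in SCALE

def small_leap(prev, candidate, max_interval=5):
    return prev is None or abs(candidate - prev) <= max_interval

def no_repeat(prev, candidate):
    return candidate != prev

CONSTRAINTS = [in_scale, small_leap, no_repeat]

def constrained_pattern(length, low=48, high=72, seed=1):
    rng = random.Random(seed)
    pattern, prev = [], None
    for _ in range(length):
        candidates = [p for p in range(low, high + 1)
                      if all(c(prev, p) for c in CONSTRAINTS)]
        prev = rng.choice(candidates)   # never empty for these particular constraints
        pattern.append(prev)
    return pattern

print(constrained_pattern(16))          # a 16-note MIDI pitch sequence
```

The point of the sketch is the design idea described in the abstract: the simpler each constrained parameter is, the easier it becomes to combine several of them on the same musical structure while still producing contrast.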
133

Så styr algoritmer ditt flöde : En studie om personliga algoritmer inom sociala medier / How algorithms control your feed : A study about personal algorithms on social media

Larsson, Matilda, Nilsson, Nelly January 2021 (has links)
Social media is a phenomenon that has developed drastically over the past 20 years. Instagram and Facebook are two of the most widely used social media platforms and have evolved markedly with digitalisation. In 2016 Instagram changed its algorithm, and in 2018 Facebook did the same. The algorithms now aim to tailor users' feeds so as to show what is most relevant to each account and to bring people closer together. The study examines how Instagram's and Facebook's algorithms work and how they affect people. The thesis aims to clarify how an algorithm is perceived, how the platforms may have drifted from their original purposes of use, and which ethical questions have been raised in the process. To fulfil the aim of the study, a survey spanning three generations was designed to study how algorithms affect people and the role algorithms play in society. The thesis addresses both positive and negative aspects of algorithms. A qualitative study with quantifiable data is based on structured interviews in the form of web-based questionnaires. In a first stage, the respondents answer a questionnaire. In the next stage, they watch the documentary The Social Dilemma and then answer follow-up questions in a concluding stage.
134

Exploring interactive features in auto-generated articles through article visualization

Abdel-Rehim, Ali January 2019 (has links)
News articles generated by artificial intelligence rather than by human reporters are referred to as automated journalism. This thesis explores how to create a trustworthy representation of news articles that are mainly generated by algorithmic decisions. The hypothesis of this thesis takes the background context (characteristics of the underlying system design) and the foreground context (millennials' news consumption behaviour) into consideration in order to provide an optimal approach for the trustworthy representation of auto-generated articles. A theory about algorithmic transparency in the news media has been investigated to reveal information about the system's selection processes. The principles of glanceability and heuristic principles are applied to the proposed design solutions (interactive features). The outcomes show that newsreaders are positive towards a system that tries to encourage them to fact-check the articles. Additionally, the outcomes contributed to an understanding of how newsreaders can consume auto-generated news.
135

Algorithmic Information Theory Applications in Bright Field Microscopy and Epithelial Pattern Formation

Mohamadlou, Hamid 01 May 2015 (has links)
Algorithmic Information Theory (AIT), also known as Kolmogorov complexity, is a quantitative approach to defining information. AIT is mainly used to measure the amount of information present in the observations of a given phenomenon. In this dissertation we explore the applications of AIT in two case studies. The first examines bright field cell image segmentation and the second examines the information complexity of multicellular patterns. In the first study we demonstrate that our proposed AIT-based algorithm provides an accurate and robust bright field cell segmentation. Cell segmentation is the process of detecting cells in microscopy images, which is usually a challenging task for bright field microscopy due to the low contrast of the images. In the second study, which is the primary contribution of this dissertation, we employ an AIT-based algorithm to quantify the complexity of information content that arises during the development of multicellular organisms. We simulate multicellular organism development by coupling the Gene Regulatory Networks (GRN) within an epithelial field. Our results show that the configuration of GRNs influences the information complexity in the resultant multicellular patterns.
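Since Kolmogorov complexity is uncomputable, practical AIT work typically approximates it, a common proxy being compressed length. The short Python sketch below is a generic illustration of that idea, not the dissertation's segmentation or pattern-complexity algorithm: a regular binary pattern compresses to far fewer bytes than a random one of the same size, i.e. it carries less algorithmic information.

```python
import random
import zlib

def approx_complexity(bits):
    """Approximate algorithmic information content by compressed length (in bytes)."""
    return len(zlib.compress(bytes(bits), 9))

n = 4096
rng = random.Random(0)
striped = [(i // 64) % 2 for i in range(n)]        # highly regular pattern
noisy = [rng.randint(0, 1) for _ in range(n)]      # incompressible random pattern

print("striped pattern:", approx_complexity(striped), "bytes")
print("random pattern: ", approx_complexity(noisy), "bytes")
# The regular pattern compresses far better, i.e. it carries much less
# algorithmic information than the random one.
```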
136

Défis algorithmiques pour les simulations biomoléculaires et la conception de protéines / Algorithmic challenges for biomolecular simulations and protein design

Druart, Karen 05 December 2016 (has links)
Computational protein design is a method to modify proteins and obtain new properties, using their 3D structure and molecular modelling. To make the method more predictive, the models need continued improvement. In this thesis, we addressed the problem of explicitly representing the flexibility of the protein backbone. We developed a "multi-state" design approach, based on a small library of backbone conformations, defined ahead of time. In a Monte Carlo framework, given the rugged protein energy landscape, large backbone motions can only be accepted if precautions are taken. Thus, to explore these conformations, along with sidechain mutations and motions, we have introduced a new type of Monte Carlo move. The move is a "hybrid" one, where the backbone changes its conformation, then a short Monte Carlo relaxation of the sidechains is done, followed by an acceptance test. To obtain a Boltzmann sampling of states, the acceptance probability should have a specific form, which involves a path integral that is difficult to calculate. Two approximate forms are explored: the first is based on a single relaxation path, or "generating path" (Single Path Approximation or SPA). The second is more complex and relies on a collection of paths, obtained by shuffling the elementary steps of the generating path (Permuted Path Approximation or PPA). These approximations are tested in depth and compared on two proteins. Free energy differences between the backbone conformations are computed using three different approaches, which move the system reversibly from one conformation to another but follow very different routes. Good agreement is obtained between the methods and a wide range of parameterizations, indicating that the free energy behaves as a state function, as it should, and strongly suggesting that Boltzmann sampling is verified. The sampling method is applied to the tyrosyl-tRNA synthetase enzyme, allowing us to identify sequences that prefer either an open or a closed conformation of an active-site loop, so that, in principle, we can control or redesign the loop conformation. Finally, we describe preliminary work to make the protein backbone fully flexible, moving within a continuous rather than a discrete space. This new conformational space requires a complete reorganization of the energy calculation and Monte Carlo simulation scheme, increases simulation cost substantially, and requires a much more aggressive parallelization of our software.
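The hybrid move described above alternates a backbone jump with a brief sidechain-only relaxation before an acceptance test. The Python sketch below illustrates only that control flow, using a plain Metropolis criterion and a toy energy function; the thesis's actual SPA/PPA acceptance probabilities, energy model, and structural representation are not reproduced, and all names here (energy, BACKBONES, KT) are hypothetical placeholders.

```python
import math
import random

rng = random.Random(0)
KT = 0.6                              # assumed thermal energy, arbitrary units
BACKBONES = [0.0, 1.0, 2.0]           # stand-in for a discrete backbone library

def energy(backbone, sidechains):
    # Hypothetical toy energy: sidechain coordinates "want" to track the backbone.
    return (backbone - 1.0) ** 2 + sum((x - backbone) ** 2 for x in sidechains)

def metropolis_accept(e_old, e_new):
    return e_new <= e_old or rng.random() < math.exp(-(e_new - e_old) / KT)

def relax_sidechains(backbone, sidechains, steps=50):
    """Short Metropolis relaxation of the sidechains with the backbone held fixed."""
    sc = list(sidechains)
    for _ in range(steps):
        i = rng.randrange(len(sc))
        trial = list(sc)
        trial[i] += rng.uniform(-0.5, 0.5)
        if metropolis_accept(energy(backbone, sc), energy(backbone, trial)):
            sc = trial
    return sc

def hybrid_move(backbone, sidechains):
    """Backbone jump, sidechain relaxation, then an acceptance test. The test here
    is plain Metropolis; the thesis derives corrected SPA/PPA acceptance forms."""
    new_backbone = rng.choice([b for b in BACKBONES if b != backbone])
    relaxed = relax_sidechains(new_backbone, sidechains)
    if metropolis_accept(energy(backbone, sidechains), energy(new_backbone, relaxed)):
        return new_backbone, relaxed
    return backbone, sidechains

state = (BACKBONES[0], [0.0] * 4)
for _ in range(200):
    state = hybrid_move(*state)
print("final backbone conformation:", state[0])
```

Replacing the plain Metropolis test with the SPA or PPA acceptance probability is precisely what, according to the abstract, restores Boltzmann sampling when the relaxation is folded into the move.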
137

To whom do we listen, and why? : An exploratory study into how young adult consumers experience TikTok electronic word-of-mouth product recommendations

Dahlgren, Clara, Enshagen, Leon January 2023 (has links)
Background: Over time, social media platforms have become a part of people's daily lives. Social media allows consumers to share thoughts, ideas, and experiences with other consumers, and from this, electronic word-of-mouth (eWOM) evolved. Lately, TikTok has become one of the biggest social media platforms and, in turn, one of the biggest channels for eWOM; consumers use the app to share content, including product evaluations and recommendations. However, there is limited understanding of how eWOM affects consumers' purchasing behaviour on a platform like TikTok. Purpose: The purpose of the research is to explore young adults' purchasing behaviour based on their experience of TikTok eWOM regarding beauty products, and how the products' virality affects that behaviour. Method: The study follows an interpretivist philosophy with an inductive research approach, as the aim is to understand and explore how eWOM affects consumer purchasing behaviour. Data were collected through four focus groups and later analysed through inductive coding. Conclusion: The study concluded that four significant characteristics and evaluation factors of eWOM on TikTok affect consumers' purchase behaviour: the content creator's effect on the video message, quality, quantity, and following trends. Quantity and following trends lead to virality and FOMO, which in turn influence purchase behaviour. eWOM on TikTok was perceived to cause more impulsive purchases than other social media platforms because of its algorithm and users' trust in the platform. These impulsive purchases, in turn, influenced consumer purchase behaviour.
138

Hur upplever användare algoritmisk kuratering? / How do users experience algorithmic curation?

Gezelius, Valdemar, Hjorth, Patric January 2018 (has links)
Facebook can be seen as an example of the new web that emerged with the rise of Web 2.0, whose guidelines put emphasis on social media and an interactive web heavily centered around the user and user-created content. Since its launch, the Facebook News Feed has used algorithmic curation to give users what they want based on prior interaction. Studies analyzing how users experience the underlying algorithm have shown that people often develop their own theories about how the algorithm works and that this affects the way they interact with the feed. Algorithmic curation is today mainly applied through a seamless design philosophy, which simplifies the user experience by putting a "black box" over the underlying processes. In our study we interviewed ten students about their thoughts on the News Feed and the amount of control they have to adapt the feed to personal preferences. We aim to contribute to the discussion on the benefits and downsides of a so-called seamful design, in contrast to a seamless design, and on how users experience the results of a feed that implements algorithmic curation. Our results show a connection between a lack of trust in the system's customization tools and low transparency. Our users expressed that they could understand why they got the results they did, but we found that the vast majority of the content was irrelevant to their interests and personal preferences. Our results indicate a need for continued research on different platforms to see whether somewhat heightened transparency will help with trust issues and ease manual adaptation of systems for higher relevance.
139

Procedural Music Generation and Adaptation Based on Game State

Adam, Timothey Andrew 01 June 2014 (has links) (PDF)
Video game developers attempt to convey moods to emphasize their game's narrative. Events that occur within the game usually convey success or failure in some way meaningful to the story's progress. Ideally, when these events occur, the intended change in mood should be perceivable to the player. One way of doing so is to change the music. This requires musical tracks to represent many possible moods, states and game events. This can be very taxing on composers, and encoding the control flow (when to transition) of the tracks can prove to be tricky as well. This thesis presents AUD.js, a system developed for procedural music generation for JavaScript-based web games. By taking input from game events, the system can create music corresponding to various Western perceptions of music mood. The system was trained with classic video game music. Game development students rated the mood of 80 pieces, after which statistical representations of those pieces were extracted and added into AUD.js. AUD.js can adapt its generated music to new sets of input parameters, thereby updating the perceived mood of the generated music at runtime. We conducted A/B tests comparing static music, both composed and computer-generated, to dynamically adapting music. We find that AUD.js provides reasonably effective music for games, but that adaptiveness of the music does not necessarily improve player experience over composed music. By conducting a user study during Global Game Jam 2014, we also find that since AUD.js provides a software solution to music composition, it can be a useful tool for game music integration under time pressure.
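As a rough illustration of the adaptive idea, and not of AUD.js itself (a JavaScript system whose internals are not given in the abstract), the Python sketch below maps a hypothetical game-state "tension" value to musical parameters such as tempo, mode and note density, the kind of mapping a procedural music engine could re-evaluate whenever game events fire.

```python
import random

MAJOR = [0, 2, 4, 5, 7, 9, 11]
MINOR = [0, 2, 3, 5, 7, 8, 10]

def mood_parameters(tension):
    """Map a game-state 'tension' value in [0, 1] to musical parameters.
    Hypothetical mapping; AUD.js itself derives its model statistically from
    mood ratings of classic game music."""
    return {
        "tempo_bpm": int(70 + 90 * tension),         # calm -> slow, tense -> fast
        "scale": MINOR if tension > 0.5 else MAJOR,  # tense -> minor mode
        "note_density": 0.3 + 0.6 * tension,         # chance that a beat gets a note
    }

def generate_bar(params, root=60, beats=8, seed=0):
    """Generate one bar of MIDI pitches (None = rest) from the mood parameters."""
    rng = random.Random(seed)
    return [root + rng.choice(params["scale"])
            if rng.random() < params["note_density"] else None
            for _ in range(beats)]

calm, tense = mood_parameters(0.1), mood_parameters(0.9)
print(calm["tempo_bpm"], generate_bar(calm))
print(tense["tempo_bpm"], generate_bar(tense))
```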
140

Evaluating, Understanding, and Mitigating Unfairness in Recommender Systems

Yao, Sirui 10 June 2021 (has links)
Recommender systems are information filtering tools that discover potential matches between users and items and benefit both parties. This benefit can be considered a social resource that should be equitably allocated across users and items, especially in critical domains such as education and employment. Biases and unfairness in recommendations raise both ethical and legal concerns. In this dissertation, we investigate the concept of unfairness in the context of recommender systems. In particular, we study appropriate unfairness evaluation metrics, examine the relation between bias in recommender models and inequality in the underlying population, and propose effective unfairness mitigation approaches. We start by exploring the implications of fairness in recommendation and formulating unfairness evaluation metrics. We focus on the task of rating prediction. We identify the insufficiency of demographic parity for scenarios where the target variable is justifiably dependent on demographic features. Then we propose an alternative set of unfairness metrics, measured by how much the average predicted ratings deviate from the average true ratings. We also reduce these forms of unfairness in matrix factorization (MF) models by explicitly adding them as penalty terms to the learning objective. Next, we target a form of unfairness in matrix factorization models observed as disparate model performance across user groups. We identify four types of biases in the training data that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), which learns personalized regularization parameters that directly address the data biases. PRL poses the hyperparameter search problem as a secondary learning task. It enables back-propagation to learn the personalized regularization parameters by leveraging the closed-form solutions of alternating least squares (ALS) for solving MF. Furthermore, the learned parameters are interpretable and provide insights into how fairness is improved. Third, we conduct a theoretical analysis of the long-term dynamics of inequality in the underlying population, in terms of the fit between users and items. We view the task of recommendation as solving a set of classification problems through threshold policies. We mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we prove that a system with the formulated dynamics always has at least one equilibrium, and we provide sufficient conditions for the equilibrium to be unique. We also show that, depending on the item category relationships and the recommendation policies, recommendations in one item category can reshape the user-item fit in another item category. To summarize, in this research, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality. / Doctor of Philosophy / Recommender systems are information filtering tools that discover potential matches between users and items. However, a recommender system, if not properly built, may not treat users and items equitably, which raises ethical and legal concerns. In this research, we explore the implications of fairness in the context of recommender systems, study the relation between unfairness in recommender output and inequality in the underlying population, and propose effective unfairness mitigation approaches.
We start with finding unfairness metrics appropriate for recommender systems. We focus on the task of rating prediction, which is a crucial step in recommender systems. We propose a set of unfairness metrics measured as the disparity in how much predictions deviate from the ground-truth ratings. We also offer a mitigation method to reduce these forms of unfairness in matrix factorization models. Next, we look deeper into the factors that contribute to error-based unfairness in matrix factorization models and identify four types of biases that contribute to higher subpopulation error. Then we propose personalized regularization learning (PRL), a mitigation strategy that learns personalized regularization parameters to directly address data biases. The learned per-user regularization parameters are interpretable and provide insight into how fairness is improved. Third, we conduct a theoretical study of the long-term dynamics of inequality in the fit (e.g., interest, qualification) between users and items. We first mathematically formulate the transition dynamics of user-item fit in one step of recommendation. Then we discuss the existence and uniqueness of system equilibrium as the one-step dynamics repeat. We also show that, depending on the relation between item categories and the recommendation policies (unconstrained or fair), recommendations in one item category can reshape the user-item fit in another item category. In summary, we examine different fairness criteria in rating prediction and recommendation, study the dynamics of interactions between recommender systems and users, and propose mitigation methods to promote fairness and equality.
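As a sketch of the deviation-based metric described above (an illustrative reconstruction from the abstract, not necessarily the dissertation's exact formulation), the code below compares, item by item, how far average predicted ratings stray from average true ratings within each of two user groups, and reports the average disparity between the groups.

```python
import numpy as np

def deviation_unfairness(pred, true, group, observed):
    """Deviation-based unfairness over items.

    pred, true : (n_users, n_items) predicted and true rating matrices
    group      : boolean (n_users,) marker for the protected user group
    observed   : boolean (n_users, n_items) mask of observed ratings

    For each item, compute (mean predicted - mean true) within each group,
    then average the absolute between-group gap over items.
    """
    def per_item_deviation(members):
        m = observed & members[:, None]
        counts = m.sum(axis=0).clip(min=1)
        return ((pred * m).sum(axis=0) - (true * m).sum(axis=0)) / counts
    return np.abs(per_item_deviation(group) - per_item_deviation(~group)).mean()

# Tiny synthetic example: 6 users (first three in group A), 4 items.
rng = np.random.default_rng(0)
true = rng.integers(1, 6, size=(6, 4)).astype(float)
pred = true + rng.normal(0.0, 0.5, size=(6, 4))
pred[:3] += 0.8                                   # systematic over-prediction for group A
observed = np.ones((6, 4), dtype=bool)
group = np.array([True, True, True, False, False, False])

print(f"deviation-based unfairness: {deviation_unfairness(pred, true, group, observed):.3f}")
```

A group-level penalty of this form can be added to the MF training loss, which is the mitigation route the abstract describes.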
