311
Under the Guise of Machine Neutrality : Machine Learning Uncertainty Exploration as Design Material to Identify Gender Bias in AI Systems
Veloso, Gelson (January 2022)
Structural gendered inequality permeates intelligent systems, shaping everyday lives and reinforcing gender oppression. This study investigates how uncertainty, as an inherent characteristic of Machine Learning (ML) models, can be translated into a design material to highlight gender bias in Artificial Intelligence (AI) systems. It follows a feminist HCI methodology with a threefold horizon: the re-conceptualisation of the design space to consider human and non-human perspectives (Giaccardi & Redström, 2020); the exploration of ML uncertainty as design materiality (Benjamin et al., 2020) to underscore the gender inequality embedded in intelligent systems; and the disputed relations of ML uncertainty as materiality with unpredictability in Explainable AI systems, more specifically Graspable AI (Ghajargar et al., 2021, 2022). As a critical exploratory process, the knowledge contribution is a set of guidelines for the design of better and more equal ML systems.
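The abstract treats ML uncertainty as a design material without prescribing an implementation. Purely as a hedged illustration, and not as the method of the thesis, the sketch below shows one common way such uncertainty can be quantified: the entropy of a classifier's predicted class probabilities, which a designer could surface instead of presenting the output as a neutral fact. The example probabilities are invented.

    # Illustrative sketch only (not from the thesis): quantify predictive
    # uncertainty as the Shannon entropy of a classifier's output distribution.
    # High entropy marks predictions the model is unsure about, which a designer
    # could choose to expose rather than hide behind an apparently neutral answer.
    import numpy as np

    def predictive_entropy(probs: np.ndarray) -> np.ndarray:
        """Entropy (in nats) of each row of class probabilities."""
        eps = 1e-12  # guard against log(0)
        return -np.sum(probs * np.log(probs + eps), axis=-1)

    # Invented example: a confident prediction versus an uncertain one.
    confident = np.array([[0.97, 0.02, 0.01]])
    uncertain = np.array([[0.40, 0.35, 0.25]])
    print(predictive_entropy(confident))  # low entropy: the model appears sure
    print(predictive_entropy(uncertain))  # high entropy: a candidate for design attention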
312
Hvorvidt HR har lykkes med å iverksette nødvendige strategier i en digital verden ”Mennesker er vår viktigste ressurs” : Whether HR has succeeded in implementing necessary strategies in a digital world “People are our most important asset”
Gundersen, Tone (January 2023)
No description available.
313
Exploring the Role of Virtual Companions in Alleviating Loneliness Among Young Adults
Borvén, Mette (January 2023)
Loneliness among young adults has risen over the past years, and today around 62% of young adults say they have been or are negatively affected by it (Stickley & Koyanagi, 2016). This can lead to depression and anxiety, and in the worst cases to addiction, self-harm, or suicidal thoughts. This study examines how AI-based interventions, such as virtual companions, can be designed and implemented to reduce loneliness among young adults, by exploring the possibilities and challenges of designing and developing an AI companion. The study combines a systematic literature review with semi-structured interviews with 20 participants; the interviews were analysed with a thematic analysis, and the results were visualised with a concept map. The findings present four themes: “Empowering Loneliness Combat with AI”, “Treading the Thin Line of Human-like AI”, “Irreplaceable Human Connection” and “Nurturing Trust in the AI Ecosystem”. The knowledge this study presents can help support the design and development of an AI companion to ease loneliness in young adults.
314
Artificial Intelligence for Graphical User Interface Design : Analysing stakeholder perspectives on AI integration in GUI development and essential characteristics for successful implementation
Henriksson, Linda and Wingårdh, Anna (January 2023)
In today's world, Artificial Intelligence (AI) has seamlessly integrated into our daily lives without us even realising it. We witness AI-driven innovations all around us, subtly enhancing our routines and interactions. Voice assistants such as Siri, Alexa, and Google Assistant have become prime examples of AI technology, assisting us with simple tasks and responding to our inquiries. As these once futuristic ideas have become an indispensable part of our everyday reality, they also become relevant for the field of GUI design. This thesis explores the views of stakeholders, such as designers, alumni, students, and teachers, on the inevitable implementation of artificial intelligence (AI) in graphical user interface (GUI) development. It aims to provide an understanding of stakeholders' thoughts and needs, focusing on two research questions. RQ1: What are the viewpoints of design stakeholders regarding the use of Artificial Intelligence tools in GUI development? RQ2: What characteristics should be considered when including AI in GUI development? To collect data, the thesis used A/B testing and question sessions. In the A/B testing, participants watched two videos, one showing how to digitise a sketch using an AI tool (Uizard) and the other showing how to do the same using a traditional GUI design tool (Figma). Afterwards, the participants answered questions about their experience of the two different ways to digitise a sketch. The study highlighted a generally positive outlook among the participating stakeholders. Students and alumni expressed more enthusiasm, whereas experienced professionals and teachers were cautious yet open to AI integration. Concerns were voiced regarding potential drawbacks, including limited control and issues of over-reliance. The findings underscore AI's potential to streamline tasks but also emphasise the need for manual intervention and raise questions about maintaining control and creative freedom. We hope this work serves as a valuable starting point for other researchers interested in exploring this topic.
315
Krav och metoder för insamling av data för maskinlärning inom svensk byggindustri : En utforskning av behov och anpassning av dataset [Requirements and methods for collecting data for machine learning in the Swedish construction industry: an exploration of needs and dataset adaptation]
Larsson, Isabell (January 2023)
Out of an ongoing research project on artificial intelligence (AI), a need has emerged to find a method for data collection in a Swedish construction context; the purpose of this thesis was to meet that need. The research lies within machine learning (ML) and computer vision (CV) in the construction and civil engineering industry, where computer vision broadly means that a computer extracts information from visual data, that is, images and videos. The data collection needs to be extensive enough to create a dataset for machine learning, with the goal that Boston Dynamics' SPOT can be used in the construction industry. Four data-collection methods were evaluated and compared in order to find the one that provides the best conditions for building a dataset. The best approach given the conditions of the study was an experimental method of inductive character, so the focus has mainly been on method development based on empirical data rather than on theory. Broad research questions were posed to find the best data-collection method for machine learning, and they were answered by structuring the study in three main parts: a theoretical, an empirical, and a technical part. The theoretical part was a smaller contextual literature study that provided a deeper understanding of AI, focusing on the aspects considered most relevant to the study, such as supervised computer vision and current research on AI applications in a construction context. In the empirical part, case studies were carried out in which data was collected through the different methods and evaluated from several viewpoints to determine which method was most sustainable in practice. The technical part focused mainly on annotating and training on the data; the result of training was a score between 0 and 1, where 1 was best. The technical part also included an evaluation of operating systems for ML. The four data-collection methods evaluated were:

1. Manual photography of plasterboard on construction sites. One construction site was visited and just under 200 images were collected. The model trained with data from method one gave a score of 0.46.
2. Making use of the workforce already present on construction sites. The idea was that site personnel would photograph plasterboard during the working day and submit the images to a shared repository. This method was rejected by the construction company in question, partly due to organisational problems and partly for ownership reasons.
3. Using an image gallery of historical data. Several such galleries were examined: the project was given access to one of them, and an employee at the construction company went through several others. In total, an estimated 4,000-5,000 images were searched, from which a dataset of 38 images was collected. The score when training the model on data from method three was 0.
4. Generating synthetic images. A simple model was built in Revit, from which a total of 740 images were collected. When the model trained on the method-four images was evaluated, the score was 0.9; when the validation images were replaced with real instead of synthetic images, the score dropped to 0.32. Closer inspection showed that the model recognised the plasterboard but was confused by noise in the background, where filler on the wall in the real image was mistaken for plasterboard.

Therefore, a hybrid method was tested in which a small number of real images were added to the training data. The hybrid method gave a score of 0.66. In summary, the results of this study showed that none of the existing methods, in their current forms, are suitable for machine-learning purposes on the construction site. It did, however, emerge that hybrid methods may be worth exploring further as a potential solution. An interesting research direction would be to examine hybrid methods that combine elements from methods one and four, as described above. An alternative hybrid method could also be explored, in which the surroundings from the image gallery are incorporated into a virtual environment and data is collected with processes similar to those of method four. These hybrid methods may offer advantages that overcome the limitations identified in the individual methods, and thereby enable more efficient and reliable data collection for machine-learning applications in the studied context. Future research should focus on exploring and evaluating these hybrid methods to better understand their potential and benefits within machine learning and data science.
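As a hedged sketch only (the thesis publishes no code), the snippet below illustrates how the hybrid approach discussed above could be assembled: the bulk of the training data comes from synthetic renders (method four), a small number of real site photographs (method one) are mixed in, and validation uses real images only so that the score reflects field conditions. The directory names and the 10% real-image fraction are assumptions.

    # Hedged sketch of the hybrid data-collection idea: many synthetic renders
    # plus a few real site photos for training, real photos only for validation.
    # Paths and the 10% split are illustrative assumptions, not values from the thesis.
    import random
    from pathlib import Path

    def build_hybrid_split(synthetic_dir, real_dir, real_train_fraction=0.1, seed=42):
        synthetic = sorted(Path(synthetic_dir).glob("*.jpg"))
        real = sorted(Path(real_dir).glob("*.jpg"))

        rng = random.Random(seed)
        rng.shuffle(real)
        n_real_train = int(len(real) * real_train_fraction)

        train = synthetic + real[:n_real_train]  # synthetic bulk plus a few real photos
        val = real[n_real_train:]                # validate on real images only
        return train, val

    # Example call with the rough dataset sizes mentioned above
    # (about 740 synthetic renders and just under 200 real photos):
    # train_imgs, val_imgs = build_hybrid_split("renders/", "site_photos/")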
316
A Jagged Little Pill: Ethics, Behavior, and the AI-Data Nexus
Kormylo, Cameron Fredric (21 December 2023)
The proliferation of big data and the algorithms that utilize it have revolutionized the way in which individuals make decisions, interact, and live. This dissertation presents a structured analysis of the behavioral ramifications of artificial intelligence (AI) and big data in contemporary society. It offers three distinct but interrelated explorations. The first chapter investigates consumer reactions to digital privacy risks under the General Data Protection Regulation (GDPR), an encompassing regulatory act in the European Union aimed at enhancing consumer privacy controls. This work highlights how consumer behavior varies substantially between high- and low-risk privacy settings. These findings challenge existing notions surrounding privacy control efficacy and suggest a more complex consumer risk assessment process. The second study shifts to an investigation of historical obstacles to consumer adherence to expert advice, specifically betrayal aversion, in financial contexts. Betrayal aversion, a well-studied phenomenon in the economics literature, is defined as the strong dislike for the violation of trust norms implicit in a relationship between two parties. Through a complex simulation, it contrasts human and algorithmic financial advisors, revealing a significant decrease in betrayal aversion when human experts are replaced by algorithms. This shift indicates a transformative change in the dynamics of AI-mediated environments. The third chapter addresses nomophobia – the fear of being without one's mobile device – in the workplace, quantifying its stress-related effects and impacts on productivity. This investigation not only provides empirical evidence of nomophobia's real-world implications but also underscores the growing interdependence between technology and mental health. Overall, the dissertation integrates interdisciplinary theoretical frameworks and robust empirical methods to delineate the profound and often nuanced implications of the AI-data nexus on human behavior, underscoring the need for a deeper understanding of our relationship with evolving technological landscapes. / Doctor of Philosophy / The massive amounts of data collected online and the smart technologies that use this data often affect the way we make decisions, interact with others, and go about our daily lives. This dissertation explores that relationship, investigating how artificial intelligence (AI) and big data are changing behavior in today's society. In my first study, I examine how individuals respond to high and low risks of sharing their personal information online, specifically under the General Data Protection Regulation (GDPR), a new regulation meant to protect online privacy in the European Union. Surprisingly, the results show that changes enacted by GDPR, such as default choices that automatically select the more privacy-preserving option, are more effective in settings in which the risk to one's privacy is low. This implies the process by which people decide when and with whom to share information online is more complex than previously thought. In my second study, I shift focus to examine how people follow advice from experts, especially in financial decision contexts. I look specifically at betrayal aversion, a well-studied tendency in economics that highlights individuals' unwillingness to trust someone when they fear they might be betrayed, and I examine whether betrayal aversion changes when human experts are replaced by algorithms. Interestingly, individuals displayed no betrayal aversion when given a financial investment algorithm, showing that non-human experts may have certain benefits for consumers over their human counterparts. Finally, I study a modern phenomenon called 'nomophobia' – the fear of being without your mobile phone – and how it affects people at work. I find that this fear can significantly increase stress, especially as phone battery levels decrease. This leads to a reduction in productivity, highlighting how deeply technology is intertwined with our mental health. Overall, this work utilizes a mix of theories and detailed analyses to show the complex and often subtle ways AI and big data are influencing our actions and thoughts. It emphasizes the importance of understanding our relationship with technology as it continues to evolve rapidly.
317
”What a journey” : Ett autoetnografiskt utforskande av att använda AI-stödda verktyg under en gestaltande process [An autoethnographic exploration of using AI-supported tools during a creative process]
Bengtsson, Henrik (January 2023)
In today's society, AI is discussed daily in the media, both as a threat and as an opportunity, and new tools and AI aids appear all the time. But what is it actually like to use them in a creative process? What questions and problems do I face as a creator when I want to use AI-supported tools? Can I trust the results I get from the tools? Can I trust the providers and the results that are produced? Are the results representative, inclusive, and fact-based, or do I need to review and assess them critically? By using AI-supported tools in a creative process, in which I created a story that was then visualised and set to music, I have examined how these tools affected my creative work, and at the same time investigated the risks and problems that can be associated with using such AI-supported tools. The results show that although it is possible to reach results quickly in the creative process, there are several questions we need to take a stand on.
318
Towards Explainable AI Using Attribution Methods and Image Segmentation
Rocks, Garrett J (01 January 2023)
With artificial intelligence (AI) becoming ubiquitous in a broad range of application domains, the opacity of deep learning models remains an obstacle to their adoption within safety-critical systems. Explainable AI (XAI) aims to build trust in AI systems by revealing the important inner mechanisms of what human users have otherwise had to treat as a black box. This thesis aims to improve the transparency and trustworthiness of deep learning algorithms by combining attribution methods with image segmentation methods, and in doing so has the potential to improve the trust and acceptance of AI systems, leading to more responsible and ethical AI applications. An exploratory algorithm called ESAX is introduced; in some cases it achieves performance greater than other top attribution methods in PIC testing. These results lay a foundation for future work in segmentation attribution.
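ESAX itself is not reproduced here. As a hedged sketch of the general idea of combining attribution with segmentation, the snippet below averages a pixel-level attribution map over image segments, turning noisy per-pixel saliency into region-level importance scores that are easier for a person to interpret. The toy arrays are invented, and the attribution and segmentation inputs could come from any standard method (for example, integrated gradients and SLIC superpixels).

    # Hedged sketch (not the ESAX algorithm): aggregate pixel-level attributions
    # over segments to obtain region-level explanations.
    import numpy as np

    def segment_attribution(attribution, segments):
        """Mean attribution per segment.

        attribution: (H, W) pixel-level importance map (e.g., from a gradient-based method).
        segments:    (H, W) integer label map (e.g., superpixels or semantic masks).
        """
        return {int(label): float(attribution[segments == label].mean())
                for label in np.unique(segments)}

    # Invented 4x4 example: segment 0 (the top-left block) carries most of the attribution.
    attr = np.array([[0.9, 0.8, 0.1, 0.0],
                     [0.7, 0.9, 0.2, 0.1],
                     [0.1, 0.0, 0.0, 0.1],
                     [0.2, 0.1, 0.1, 0.0]])
    segs = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [1, 1, 1, 1],
                     [1, 1, 1, 1]])
    print(segment_attribution(attr, segs))  # segment 0 scores ~0.83, segment 1 ~0.08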
319
Contributions to the Interface between Experimental Design and Machine Learning
Lian, Jiayi (31 July 2023)
In data science, machine learning methods, such as deep learning and other AI algorithms, have been widely used in many applications. These machine learning methods often have complicated model structures with a large number of model parameters and a set of hyperparameters, and they are data-driven in nature. Thus, it is not easy to provide a comprehensive evaluation of their performance with respect to data quality and the hyperparameters of the algorithms. In the statistical literature, design of experiments (DoE) is a set of systematic methods for effectively investigating the effects of input factors on complex systems. Few works have focused on using DoE methodology to evaluate the quality assurance of AI algorithms, even though an AI algorithm is naturally a complex system. An understanding of the quality of Artificial Intelligence (AI) algorithms is important for confidently deploying them in real applications such as cybersecurity, healthcare, and autonomous driving. In this dissertation, I aim to develop a set of novel methods on the interface between experimental design and machine learning, providing a systematic framework for using DoE methodology for AI algorithms.
This dissertation contains six chapters. Chapter 1 provides a general introduction to design of experiments, machine learning, and surrogate modeling. Chapter 2 focuses on investigating the robustness of AI classification algorithms by conducting a comprehensive set of mixture experiments. Chapter 3 proposes the Do-AIQ framework for using DoE to evaluate the quality assurance of AI algorithms: I establish a design-of-experiments framework to construct an efficient space-filling design in a high-dimensional constrained space and develop an effective surrogate model using an additive Gaussian process to enable the quality assessment of AI algorithms. Chapter 4 introduces a framework to generate continual learning (CL) datasets for cybersecurity applications. Chapter 5 presents a variable selection method under a cumulative exposure model for time-to-event data with time-varying covariates. Chapter 6 provides a summary of the entire dissertation. / Doctor of Philosophy / Artificial intelligence (AI) techniques, including machine learning and deep learning algorithms, are widely used in various applications in the era of big data. While these algorithms have impressed the public with their remarkable performance, their underlying mechanisms are often highly complex and difficult to interpret. As a result, it becomes challenging to comprehensively evaluate the overall performance and quality of these algorithms. Design of Experiments (DoE) offers a valuable set of tools for studying and understanding the underlying mechanisms of complex systems, thereby facilitating improvements. DoE has been successfully applied in diverse areas such as manufacturing, agriculture, and healthcare, where it has played a crucial role in enhancing processes and ensuring high quality. However, few works have focused on using DoE methodology to evaluate the quality assurance of AI algorithms, even though an AI algorithm can naturally be considered a complex system. This dissertation aims to develop innovative methodologies on the interface between experimental design and machine learning. The research conducted in this dissertation can serve as practical tools for using DoE methodology in the context of AI algorithms.
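The Do-AIQ implementation is not shown in the abstract. As a hedged sketch of the general workflow, the snippet below generates a space-filling (Latin hypercube) design over two assumed hyperparameters, evaluates a placeholder quality function at each design point in place of actually training the AI algorithm, and fits a standard Gaussian process surrogate (a stand-in for the additive Gaussian process used in the dissertation) to predict quality at untried settings.

    # Hedged sketch of DoE for AI quality assurance (not the Do-AIQ code):
    # space-filling design -> evaluate the algorithm at each point -> fit a surrogate.
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # 1. Latin hypercube design over two assumed hyperparameters: learning rate and dropout.
    sampler = qmc.LatinHypercube(d=2, seed=0)
    design = qmc.scale(sampler.random(n=20), l_bounds=[1e-4, 0.0], u_bounds=[1e-1, 0.5])

    # 2. Placeholder for training/evaluating the AI algorithm at each design point.
    def measured_quality(point):
        lr, dropout = point
        return 0.9 - 5.0 * (lr - 0.01) ** 2 - (dropout - 0.2) ** 2 + 0.01 * np.random.randn()

    quality = np.array([measured_quality(p) for p in design])

    # 3. Fit a Gaussian process surrogate and predict quality at an untried setting.
    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=[0.05, 0.2]), alpha=1e-4)
    surrogate.fit(design, quality)
    mean, std = surrogate.predict([[0.02, 0.25]], return_std=True)
    print(f"predicted quality {mean[0]:.3f} +/- {std[0]:.3f}")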
320
ChatGPT and the developer's ethical responsibility : A literature study of chatbot-related ethical dilemmas from the developer's perspective
Meyer, Linda (January 2023)
In this thesis some ethical dilemmas involving conversational agents, with ChatGPT as the foremost example, are presented. Initially, the technology supporting chatbots is described to give the reader insight into their underlying structure. The reader will get an account of recent progress in the development of the technology and gain knowledge of ethical dilemmas from a developer's perspective. The main goal of this literature study is to achieve an understanding of the current situation and reflect on the developer's responsibility for building ethical chatbots. The content of this thesis is further based on previous research in the scientific field of chatbots. The study offers the developer several recommendations: for example, it highlights the importance of working with areas such as transparency, UI design, reliability, accountability, and relativization. / In this literature study, ethical questions concerning conversational agents are presented, with the recent development of ChatGPT at the centre of the study. The reader is first given a general description of the technology underlying AI-based chatbots. I give an account of recent technical developments in the area and present ethical questions from the programmer's perspective. The main purpose of the thesis is to convey an understanding of the current situation and to reflect on the developer's responsibility when it comes to creating ethical chatbots. Previous research on conversational agents forms the basis for the reflections and discussions in this thesis. The literature study concludes with a number of conclusions that can serve as advice to developers: programmers should pay attention to questions concerning transparency, accountability, and relativization, and it is also important to consider aspects such as UI design and reliability.