1. Swedish Cultural Heritage in the Age of AI: Exploring Access, Practices, and Sustainability. Gränglid, Olivia; Ström, Marika (January 2023)
This thesis aims to explore and understand the current AI landscape within Swedish cultural heritage through purposive interviews with five cultural heritage institutions with ongoing AI projects. The study fills a knowledge gap concerning the practical implementation of AI at Swedish institutions, as well as the sustainable use of these technologies for cultural heritage. The overarching discussion also covers the related topics of ethical AI and long-term sustainability, framed from a perspective of Information Practices and socio-material entanglement. Findings show that AI technologies can play an important part in cultural heritage, with a range of practical applications if certain issues are overcome, and that the utilisation of AI is expected to increase. The study also indicates a need for regulation, digitisation efforts, and increased investment in resources to adopt the technologies into current practices sustainably. The conclusion highlights a need for the cultural heritage sector to converge and find collectively applicable solutions for implementing AI.
2. Exploring artificial intelligence bias: a comparative study of societal bias patterns in leading AI-powered chatbots. Udała, Katarzyna Agnieszka (January 2023)
The development of artificial intelligence (AI) has revolutionised the way we interact with technology and each other, both in society and in professional life. Although they come with great potential for productivity and automation, AI systems have been found to exhibit biases that reflect and perpetuate existing societal inequalities. With the recent rise of AI tools built on large language model (LLM) technology, such as ChatGPT, Bing Chat and Bard AI, this research project aims to investigate the extent of AI bias in these tools and explore its ethical implications. By reviewing and analysing responses to carefully crafted prompts generated by three different AI chatbot tools, the author intends to determine whether the content generated by these tools exhibits patterns of bias related to various social identities, and to compare the extent to which such bias is present across the three tools. This study will contribute to the growing body of literature on AI ethics and inform efforts to develop more equitable and inclusive AI systems. By exploring the ethical dimensions of AI bias in selected LLMs, this research will shed light on the broader societal implications of AI and the role of technology in shaping our future.
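As a minimal illustration of the comparative method the abstract outlines, one way to quantify bias patterns is to issue the same occupation prompts to each chatbot and tally gendered pronouns in the replies. The sketch below is hypothetical: the bot names, canned replies and pronoun lexicon are invented for illustration and are not taken from the thesis.

```python
import re
from collections import Counter

# Map gendered pronouns to a coarse category; a real study would use a
# richer lexicon and many prompts per occupation.
PRONOUNS = {"he": "male", "him": "male", "his": "male",
            "she": "female", "her": "female", "hers": "female"}

def pronoun_counts(text):
    """Tally gendered pronouns in one chatbot reply."""
    counts = Counter()
    for token in re.findall(r"[a-z']+", text.lower()):
        if token in PRONOUNS:
            counts[PRONOUNS[token]] += 1
    return counts

# Invented stand-in replies to the same occupation prompt, one per tool.
replies = {
    "chatbot_a": "The engineer said he would check his design.",
    "chatbot_b": "The nurse said she had finished her shift.",
}

for bot, reply in replies.items():
    print(bot, dict(pronoun_counts(reply)))
```

Comparing these tallies across tools and across occupation prompts is one simple way to make "patterns of bias" measurable rather than anecdotal.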
3. Machine learning for complex evaluation and detection of combustion health of industrial gas turbines. Mshaleh, Mohammad (January 2024)
This study addresses the challenge of identifying anomalies in multivariate time series data, focusing specifically on the operational parameters of gas turbine combustion systems. In search of an effective detection method, the research explores three distinct machine learning methods: the Long Short-Term Memory (LSTM) autoencoder, the Self-Organizing Map (SOM), and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Through experiments, these models are evaluated to determine their efficacy in anomaly detection. The findings show that the LSTM autoencoder not only surpasses its counterparts on performance metrics but also demonstrates a unique capability to identify the underlying causes of detected anomalies. The paper then presents a comparative analysis of these techniques and discusses the implications of the models for maintaining the reliability and safety of gas turbine operations.
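The reconstruction-error principle behind the best-performing model can be sketched compactly. As a stand-in for the LSTM autoencoder (which needs a deep learning framework), the sketch fits a linear autoencoder, i.e. PCA, on healthy multivariate sensor data, flags samples that reconstruct poorly, and uses the per-sensor residual to hint at the underlying cause. The simulated data, component count and threshold are assumptions, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated correlated 4-sensor "combustion" data driven by 2 latent
# operating factors plus small measurement noise.
W = rng.normal(size=(2, 4))
train = rng.normal(size=(500, 2)) @ W + 0.1 * rng.normal(size=(500, 4))
test = rng.normal(size=(200, 2)) @ W + 0.1 * rng.normal(size=(200, 4))
test[50, 2] += 5.0           # injected fault on sensor 2
test[120, 2] += 6.0          # second injected fault on sensor 2

# Linear autoencoder in its simplest form: PCA fitted on healthy data.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
P = Vt[:2]                   # top-2 principal components

def reconstruct(X):
    """Project onto the learned subspace and back."""
    return (X - mean) @ P.T @ P + mean

# Reconstruction error as anomaly score; threshold from healthy data only.
train_err = ((train - reconstruct(train)) ** 2).sum(axis=1)
threshold = np.percentile(train_err, 99.5)
test_err = ((test - reconstruct(test)) ** 2).sum(axis=1)
anomalies = np.where(test_err > threshold)[0]

# Per-sensor residual on flagged samples hints at the root cause,
# mirroring the thesis's point about identifying underlying causes.
residual = (test[anomalies] - reconstruct(test[anomalies])) ** 2
culprits = residual.argmax(axis=1)
print(anomalies, culprits)
```

An LSTM autoencoder replaces the linear projection with a recurrent encoder/decoder, so it can also capture temporal structure that this static sketch ignores.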
4. Finding differences in perspectives between designers and engineers to develop trustworthy AI for autonomous cars. Larsson, Karl Rikard; Jönelid, Gustav (January 2023)
In the context of designing and implementing ethical Artificial Intelligence (AI), varying perspectives exist regarding developing trustworthy AI for autonomous cars. This study sheds light on the differences in perspectives and provides recommendations to minimize such divergences. By exploring the diverse viewpoints, we identify key factors contributing to the differences and propose strategies to bridge the gaps. This study goes beyond the trolley problem to visualize the complex challenges of trustworthy and ethical AI. Three pillars of trustworthy AI have been defined: transparency, reliability, and safety. This research contributes to the field of trustworthy AI for autonomous cars, providing practical recommendations to enhance the development of AI systems that prioritize both technological advancement and ethical principles.
5. Artificial Intelligence - Are there any social obstacles? An empirical study of social obstacles / Artificiell Intelligens - Finns det några sociala hinder? En empirisk studie av sociala hinder. Liliequist, Erik (January 2018)
Artificial intelligence is currently one of the most discussed topics in technical development. The possibilities are enormous, and it might revolutionize how we live our lives. There is talk of robots and AI removing the need for human workers. At the same time, there are those who view this development as deeply troubling: either from an individual perspective, asking what we should do once we no longer need to work, or from an existential perspective, raising questions about our responsibilities as humans and what it means to be human. This study does not aim to answer these grand questions, but rather shifts the focus to the near future of three to five years, while retaining the social aspects of AI development: what are the perceived greatest social issues and obstacles for a continued implementation of AI solutions in society? To answer this question, interviews were conducted with representatives of Swedish society, ranging from politicians, unions and employers' organizations to philosophers and AI researchers. Further, a literature study was made of similar studies, comparing and reflecting their findings against the views of the interviewees. In short, the interviewees have a very positive view of AI in the near future, believing that continued implementation would go relatively smoothly. Yet they pointed to a few key obstacles that may need to be addressed. Mainly, there is a risk of increased polarization of wages and power due to AI, although the interviewees stressed that this depends on how we use the technology rather than on the technology itself. Another obstacle concerned individual uncertainty about the development of AI, causing fear of what might happen. Several ethical issues were also raised; there was agreement that these need to be addressed as soon as possible, but the interviewees did not view them as an obstacle.
6. Self-Reflection on Chain-of-Thought Reasoning in Large Language Models / Självreflektion över Chain-of-Thought-resonerande i stora språkmodeller. Praas, Robert (January 2023)
A strong capability of large language models is Chain-of-Thought reasoning. Prompting a model to 'think step-by-step' has led to great performance improvements in solving problems such as planning and question answering, and the extended output provides some evidence of the rationale behind an answer or decision. In search of better, more robust, and more interpretable language model behavior, this work investigates self-reflection in large language models. Here, self-reflection consists of feedback from large language models on medical question-answering, and whether that feedback can be used to accurately distinguish between correct and incorrect answers. GPT-3.5-Turbo and GPT-4 provide zero-shot feedback scores on Chain-of-Thought reasoning for the MedQA (medical question-answering) dataset. The question-answering is evaluated on traits such as being structured, relevant and consistent. We test whether the feedback scores differ between questions that were correctly and incorrectly answered by Chain-of-Thought reasoning. The differences in feedback scores are tested statistically with the Mann-Whitney U test. Graphical visualization and logistic regressions are used to preliminarily determine whether the feedback scores are indicative of whether the Chain-of-Thought reasoning leads to the right answer. The results indicate that, across the reasoning objectives, the feedback models assign higher feedback scores to questions that were answered correctly than to those answered incorrectly. Graphical visualization shows potential for reviewing questions with low feedback scores, although logistic regressions that aimed to predict whether questions were answered correctly mostly defaulted to the majority class. Nonetheless, there seems to be a possibility for more robust output from self-reflecting language systems. / En stark förmåga hos stora språkmodeller är Chain-of-Thought-resonerande.
Att prompta en modell att tänka stegvis har lett till stora prestandaförbättringar vid lösandet av problem som planering och frågebesvarande, och med den utökade outputen ger det en del bevis rörande logiken bakom ett svar eller beslut. I sökandet efter bättre, mer robust och tolkbart beteende hos språkmodeller undersöker detta arbete självreflektion i stora språkmodeller. Forskningsfrågan är: I vilken utsträckning kan feedback från stora språkmodeller, såsom GPT-3.5-Turbo och GPT-4, på ett korrekt sätt skilja mellan korrekta och inkorrekta svar i medicinska frågebesvarande uppgifter genom användningen av Chain-of-Thought-resonerande? Här ger GPT-3.5-Turbo och GPT-4 zero-shot feedback-poäng till Chain-of-Thought-resonerande på MedQA-datasetet (medicinskt frågebesvarande). Frågebesvarandet bör vara strukturerat, relevant och konsekvent. Feedbackpoängen jämförs mellan två grupper av frågor, baserat på om dessa besvarades korrekt eller felaktigt i första hand. Statistisk testning genomförs på skillnaden i feedbackpoäng med Mann-Whitney U-testet. Grafisk visualisering och logistiska regressioner utförs för att preliminärt avgöra om feedbackpoängen är indikativa för huruvida Chain-of-Thought-resonerande leder till rätt svar. Resultaten indikerar att feedbackmodellerna, sett till resonemangsmålen, tilldelar högre feedbackpoäng till frågor som besvarats korrekt än till de som besvarats felaktigt. Grafisk visualisering visar potential för granskning av frågor med låga feedbackpoäng, även om logistiska regressioner som syftade till att förutsäga om frågorna besvarades korrekt eller inte för det mesta föll tillbaka på majoritetsklassen. Icke desto mindre verkar det finnas potential för robustare output från självreflekterande språksystem.
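The statistical step described above, comparing feedback scores between correctly and incorrectly answered questions with the Mann-Whitney U test, can be sketched without a statistics library. The normal approximation below omits the tie correction, and the feedback scores are invented for illustration; nothing here reproduces the thesis's data.

```python
import math

def rank(values):
    """1-based ranks with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation
    (reasonable for samples of ~20+; tie correction omitted for brevity)."""
    n1, n2 = len(a), len(b)
    ranks = rank(list(a) + list(b))
    r1 = sum(ranks[:n1])
    u1 = r1 - n1 * (n1 + 1) / 2        # U statistic for sample a
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return u1, p

# Hypothetical 1-5 feedback scores for correctly vs incorrectly
# answered questions; the numbers are illustrative only.
correct = [4, 5, 4, 4, 3, 5, 4, 5, 4, 4, 5, 3, 4, 5, 4]
incorrect = [2, 3, 2, 1, 3, 2, 4, 2, 3, 2, 1, 3, 2, 2, 3]
u, p = mann_whitney_u(correct, incorrect)
print(u, p)
```

A large U with a small p, as in this toy data, corresponds to the thesis's finding that correctly answered questions receive systematically higher feedback scores.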
7. Taking Responsible AI from Principle to Practice: A study of challenges when implementing Responsible AI guidelines in an organization and how to overcome them. Hedlund, Matilda; Henriksson, Hanna (January 2023)
The rapid advancement of AI technology emphasizes the importance of developing practical and ethical frameworks to guide its evolution and deployment in a responsible manner. As AI grows more complex and more capable of influencing society, AI researchers and other prominent voices argue that AI development has to be regulated to a greater extent. This study examines the practical implementation of Responsible AI guidelines in an organization by investigating the challenges encountered and proposing solutions to overcome them. Previous research has primarily focused on conceptualizing Responsible AI guidelines, resulting in a large number of abstract, high-level recommendations; there is now an emerging demand to shift the focus toward studying their practical implementation. This study addresses the research question: 'How can an organization overcome challenges that may arise when implementing Responsible AI guidelines in practice?'. The study utilizes the guidelines produced by the European Commission's High-Level Expert Group on AI as a reference point, given their influence on shaping future AI policy and regulation in the EU. The study is conducted in collaboration with the telecommunications company Ericsson (henceforth 'the case organization'), which has a large global workforce and headquarters in Sweden. The focus is narrowed to the department that develops AI internally for other units to simplify operations and processes (henceforth 'the AI unit'). Through an inductive, interpretive approach, data from 16 semi-structured interviews and organization-specific documents were analyzed through a thematic analysis.
The findings reveal challenges related to (1) understanding and defining Responsible AI, (2) technical conditions and complexity, (3) organizational structures and barriers, as well as (4) inconsistent and overlooked ethics. Proposed solutions include (1) education and awareness, (2) integration and implementation, (3) governance and accountability, and (4) alignment and values. The findings contribute to a deeper understanding of Responsible AI implementation and offer practical recommendations for organizations navigating the rapidly evolving landscape of AI technology.
8. Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems / Verktyg och metoder för företag att utveckla transparenta och rättvisa maskininlärningssystem. Schildt, Alexandra; Luo, Jenny (January 2020)
AI has quickly grown from a vague concept to an emerging technology that many companies are looking to integrate into their businesses, and is widely considered an ongoing "revolution" transforming science and society altogether. Researchers and organizations agree that AI and the recent rapid developments in machine learning carry huge potential benefits. At the same time, there is increasing worry that ethical challenges are not being addressed in the design and implementation of AI systems. As a result, AI has sparked a debate about what principles and values should guide its development and use. However, there is a lack of consensus about what values and principles should guide the development, as well as what practical tools should be used to translate such principles into practice. Although researchers, organizations and authorities have proposed tools and strategies for working with ethical AI within organizations, a holistic perspective is missing, one that ties together the tools and strategies proposed in ethical, technical and organizational discourses. The thesis aims to contribute knowledge to bridge this gap by addressing the following purpose: to explore and present the tools and methods companies and organizations should have in place to build machine learning applications in a fair and transparent manner. The study is qualitative in nature, and data collection was conducted through a literature review and interviews with subject-matter experts. In our findings, we present a number of tools and methods to increase fairness and transparency. Our findings also show that companies should work with a combination of tools and methods, both outside and inside the development process, as well as in different stages of the machine learning development process.
Tools used outside the development process, such as ethical guidelines, appointed roles, workshops and training sessions, have positive effects on alignment, engagement and knowledge, while providing valuable opportunities for improvement. Furthermore, the findings suggest that it is crucial to translate high-level values into low-level requirements that are measurable and can be evaluated against. We propose a number of pre-model, in-model and post-model techniques that companies can and should implement to increase fairness and transparency in their machine learning systems. / AI har snabbt vuxit från att vara ett vagt koncept till en ny teknik som många företag vill eller är i färd med att implementera. Forskare och organisationer är överens om att AI och utvecklingen inom maskininlärning har enorma potentiella fördelar. Samtidigt finns det en ökande oro för att utformningen och implementeringen av AI-system inte tar de etiska riskerna i beaktande. Detta har triggat en debatt kring vilka principer och värderingar som bör vägleda AI i dess utveckling och användning. Det saknas enighet kring vilka värderingar och principer som bör vägleda AI-utvecklingen, men också kring vilka praktiska verktyg som skall användas för att implementera dessa principer i praktiken. Trots att forskare, organisationer och myndigheter har föreslagit verktyg och strategier för att arbeta med etiskt AI inom organisationer saknas ett helhetsperspektiv som binder samman de verktyg och strategier som föreslås i etiska, tekniska och organisatoriska diskurser. Rapporten syftar till att överbrygga detta gap med följande syfte: att utforska och presentera olika verktyg och metoder som företag och organisationer bör ha för att bygga maskininlärningsapplikationer på ett rättvist och transparent sätt. Studien är av kvalitativ karaktär och datainsamlingen genomfördes genom en litteraturstudie och intervjuer med ämnesexperter från forskning och näringsliv.
I våra resultat presenteras ett antal verktyg och metoder för att öka rättvisa och transparens i maskininlärningssystem. Våra resultat visar också att företag bör arbeta med en kombination av verktyg och metoder, både utanför och inuti utvecklingsprocessen men också i olika stadier i utvecklingsprocessen. Verktyg utanför utvecklingsprocessen så som etiska riktlinjer, utsedda roller, workshops och utbildningar har positiva effekter på engagemang och kunskap samtidigt som de ger värdefulla möjligheter till förbättringar. Dessutom indikerar resultaten att det är kritiskt att principer på hög nivå översätts till mätbara kravspecifikationer. Vi föreslår ett antal verktyg i pre-model, in-model och post-model som företag och organisationer kan implementera för att öka rättvisa och transparens i sina maskininlärningssystem.
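As one concrete example of the post-model techniques the findings refer to, a small fairness audit can compute group gaps directly from model outputs. The two metrics below (demographic parity difference and equal-opportunity difference) and all data are illustrative assumptions, not the thesis's own toolset.

```python
def rate(preds, mask):
    """Mean of preds over the positions selected by mask."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

# Hypothetical binary model decisions, ground truth and a protected
# attribute splitting the 12 individuals into groups A and B.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
y_true = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1]
group_a = [True] * 6 + [False] * 6

# Demographic parity: difference in positive-decision rates per group.
dp_gap = rate(y_pred, group_a) - rate(y_pred, [not g for g in group_a])

# Equal opportunity: difference in true-positive rates per group.
tp_a = rate(y_pred, [g and t == 1 for g, t in zip(group_a, y_true)])
tp_b = rate(y_pred, [(not g) and t == 1 for g, t in zip(group_a, y_true)])
eo_gap = tp_a - tp_b

print(round(dp_gap, 3), round(eo_gap, 3))
```

Translating a high-level value such as "fairness" into a measurable gap like these is exactly the kind of low-level, evaluable requirement the findings call for.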
9. AI, beslutsstöd och kampen mot antibiotikaresistens: En scoping review / AI, decision support and the fight against antibiotic resistance: A scoping review. Ganebo Eriksson, Elin; Sjögren, Malin (January 2024)
Introduktion: Antibiotikaresistens är ett allvarligt och komplext folkhälsoproblem. Genom initiativet One Health framhåller WHO att antibiotikaresistens behöver ses holistiskt och att tvärdisciplinära lösningar krävs. AI och maskininlärning bedöms ha stor potential att användas inom beslutsstöd för att begränsa antibiotikaresistensen. För att man ska våga använda och implementera AI bör den vara tillförlitlig, vilket innebär att hänsyn till etiska aspekter bör tas under systemens hela livscykel. Trots förhoppningarna kring AI:s potential är forskningsfältet ungt och det finns beskrivna svårigheter med att utföra systematiska litteraturstudier. Det kan därför finnas behov av studier av kartläggande karaktär. Syfte: Syftet var att kartlägga rådande kunskapsläge kring hur artificiell intelligens kan användas som beslutsstöd i arbetet med att begränsa antibiotikaresistens. Metod: En kvalitativ scoping review med en induktiv tematisk analys. Resultat: Maskininlärning, en form av AI, användes för att utveckla beslutsstöd tänkta att implementeras i klinisk miljö. De hade i regel som avsikt att på olika sätt och i olika grad förutse viktiga aspekter i ett vårdförlopp som kan hjälpa vårdpersonal att välja en individanpassad antibiotikabehandling. Förhoppningarna med tekniken motiverades med en rad olika teoretiska nyttor, men de reala nyttorna kunde i regel inte konstateras inom ramen för studierna. Slutsats: För att kunna konstatera och fördela nyttan krävs vidare forskning som tar hänsyn till etisk AI. / Introduction: Antibiotic resistance is a serious and complex public health issue. Through the One Health initiative, the WHO calls for a holistic approach to antibiotic resistance and for interdisciplinary solutions. AI and machine learning are considered to have great potential for use in decision support to limit antibiotic resistance. For AI to be trusted and implemented, it should be reliable, which means that ethical aspects should be considered throughout the life cycle of the systems.
Despite hopes for the potential of AI, the research field is young and systematic literature studies are reported to be difficult to conduct. There may therefore be a need for studies of a mapping nature. Purpose: The purpose was to map the current state of knowledge on how artificial intelligence can be used as decision support in efforts to limit antibiotic resistance. Method: A qualitative scoping review with an inductive thematic analysis. Results: Machine learning, a form of AI, was used to develop decision support intended for implementation in clinical settings. These systems generally aimed to predict, in various ways and to varying degrees, important aspects of the course of care that could help healthcare professionals choose an individualized antibiotic treatment. The hopes placed in the technology were justified by a variety of theoretical benefits, but the real benefits could generally not be ascertained within the studies. Conclusion: Further research is needed to establish and distribute the benefits while also considering ethical AI.