41

Prompt engineering and its usability to improve modern psychology chatbots / Prompt engineering och dess användbarhet för att förbättra psykologichatbottar

Nordgren, Isak, E. Svensson, Gustaf January 2023 (has links)
As advancements in chatbots and Large Language Models (LLMs) such as GPT-3.5 and GPT-4 continue, their applications in diverse fields, including psychology, expand. This study investigates the effectiveness of LLMs optimized through prompt engineering, aiming to enhance their performance in psychological applications. To this end, two distinct versions of a GPT-3.5-based chatbot were developed: a version similar to the base model, and a version equipped with a more extensive system prompt detailing expected behavior. A panel of professional psychologists evaluated these models based on a predetermined set of questions, providing insight into their potential future use as psychological tools. Our results indicate that an overly prescriptive system prompt can unintentionally limit the versatility of the chatbot, making a careful balance in instruction specificity essential. Furthermore, while our study suggests that current LLMs such as GPT-3.5 are not capable of fully replacing human psychologists, they can provide valuable assistance in tasks such as basic question answering, consolation and validation, and triage. These findings provide a foundation for future research into the effective integration of LLMs in psychology and contribute valuable insights into the promising field of AI-assisted psychological services.
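The two chatbot variants described above differ only in the specificity of their system prompt. A minimal sketch of how such a comparison could be set up; the prompt texts and message format below are illustrative assumptions, not the thesis's actual prompts:

```python
# Hypothetical system prompts: a near-base configuration versus a more
# prescriptive one detailing expected behavior (both invented for illustration).
BASE_PROMPT = "You are a helpful assistant."

DETAILED_PROMPT = (
    "You are a supportive psychology assistant. "
    "Always validate the user's feelings before offering advice, "
    "never give medical diagnoses, and suggest professional help "
    "when the user describes severe distress."
)

def build_request(system_prompt: str, user_message: str) -> list:
    """Assemble a chat-completion style message list for a given system prompt."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

# The same user message is sent to both variants so that any difference in
# the replies can be attributed to the system prompt alone.
base_messages = build_request(BASE_PROMPT, "I feel anxious about exams.")
detailed_messages = build_request(DETAILED_PROMPT, "I feel anxious about exams.")
```

Each message list would then be passed to the model API, and the replies compared by the evaluation panel.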
42

Cooperative versus Adversarial Learning: Generating Political Text

Jonsson, Jacob January 2018 (has links)
This thesis aims to evaluate the current state of the art for unconditional text generation and compare established models with novel approaches in the task of generating texts, after being trained on texts written by political parties from the Swedish Riksdag. First, the progression of language modeling from n-gram models and statistical models to neural network models is presented. This is followed by theoretical arguments for the development of adversarial training methods, where a generator neural network tries to fool a discriminator network trained to distinguish between real and generated sentences. One of the methods at the research frontier diverges from the adversarial idea and instead uses cooperative training, where a mediator network is trained instead of a discriminator. The mediator is then used to estimate a symmetric divergence measure between the true distribution and the generator's distribution, which is to be minimized in training. A set of experiments evaluates the performance of cooperative training and adversarial training, and finds that both have advantages and disadvantages. In the experiments, adversarial training increases the quality of generated texts, while cooperative training increases their diversity. These findings are in line with theoretical expectations.
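The symmetric divergence measure that the mediator network estimates is closely related to the Jensen-Shannon divergence between the true distribution and the generator's distribution. A toy illustration on hand-picked discrete distributions (the numbers are invented):

```python
import math

def js_divergence(p, q):
    """Symmetric Jensen-Shannon divergence between two discrete distributions
    (zero iff the distributions are identical)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms.
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

true_dist = [0.7, 0.2, 0.1]   # toy "true" next-token distribution
gen_dist  = [0.5, 0.3, 0.2]   # toy generator distribution
print(round(js_divergence(true_dist, gen_dist), 4))
```

Cooperative training would adjust the generator to drive this quantity toward zero.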
43

Embodied Virtual Reality: The Impacts of Human-Nature Connection During Engineering Design

Trump, Joshua Jordan 19 March 2024 (has links)
The engineering design process can underutilize nature-based solutions during infrastructure development. Instances of nature within the built environment are reflections of the human-nature connection, which may alter how designers ideate solutions to a given design task, especially through virtual reality (VR) as an embodied perspective-taking platform. Embodied VR helps designers "see" as an end-user sees, inclusive of the natural environment, through the uptake of an avatar such as a bird or fish. Embodied VR elicits empathy toward the avatar; e.g., when seeing as a bird in VR, one tends to feel and think as a bird. Embodied VR also influences altruistic behavior toward the environment, specifically through proenvironmental behaviors. However, limited research has examined the impact of embodied VR on the human-nature connection, or whether embodied VR affects how designers ideate, specifically around nature-based solutions as a form of proenvironmental behavior during the design process. This research first presents a formal measurement of embodied VR's impact on the human-nature connection and maps this impact onto design-related proenvironmental behaviors through design ideas, i.e., tracking changes in nature-based design choices. The design study consisted of three groups of engineering undergraduate students who were given a case study and plan review: a VR group embodying a bird (n=35), a self-lens VR group (n=34), and a control group (n=33). The case study concerned a federal mandate to minimize combined sewer overflow in a neighborhood within Cincinnati, OH. Following the plan review, the VR groups were given a VR walkthrough or flythrough of the case study area of interest as their assigned avatar (embodied: bird; self-lens: oneself). Participants were tested for their connectedness to nature, and a mock design charrette was held to measure engineering design ideas. Verbal protocol analysis was followed, instructing participants to think aloud.
Design ideation sessions were recorded and manually transcribed. The results indicated that embodiment affects the human-nature connection as measured by participants' perceived connection to nature. Only the bird group showed an increase in connectedness to nature, whereas the self-lens and control groups reported no change. This change in connectedness to nature was also reflected in engineering design ideas. The bird group was more likely to ideate green-thinking designs that solve the stormwater issue while benefiting both nature and socioeconomic conditions, whereas the control group mostly discussed gray designs as the catalyst for minimizing combined sewer overflows. The self-lens group also mentioned green design ideas and socioeconomic change, but mostly framed people, rather than nature as in the bird group, as the design's beneficiary. The analysis behind these findings combined thematic content analysis, an exploration of the design space as a function of semantic distance, and large language models (LLMs) used to synthesize design ideas and themes. The LLM synthesized design ideas with accuracy comparable to thematic content analysis, but struggled to cross-compare groups to produce generalizable findings. This research is intended to benefit the engineering design process with a) the effect of perspective-taking on design ideas through different lenses of embodied VR and b) various methods to supplement thematic content analysis for coding design ideas. / Doctor of Philosophy / The use of nature in the constructed world, such as rain gardens and natural streams for moving stormwater, is underused during the design process. Virtual reality (VR) programs, like embodiment, have the potential to increase the incorporation of nature and nature-based elements during design.
Embodiment is the process of taking on the vantage point of another being or avatar, such as a bird, fish, or insect, in order to see and move as the avatar does. Embodied VR increases the likelihood that the VR participant will act favorably toward the subject, specifically when the natural environment is involved. For example, embodying an individual cutting down trees in a virtual forest increased the likelihood that participants would later act favorably toward the environment, such as by recycling or conserving energy (Ahn and Bailenson, 2012). Ultimately, this research measures the level of connection participants feel with the environment after an embodied VR experience and seeks to discover whether this change in connection to nature affects how participants design a solution to a problem. The design experiment is based on a case study, which all participants were provided alongside supplemental plan documents. The case study concerns stormwater issues and overflows from infrastructure in a neighborhood in Cincinnati, OH, where key decision-makers were mandated by the federal government to minimize the overflows. The bird group (a bird avatar) performed a fly-through of the area of interest in VR, whereas the self-lens group (first-person, embodying oneself) walked through the same area. The control group received no VR intervention. Following the intervention, participants were asked to re-design the neighborhood and verbalize their solution, which was recorded. Participants then completed a questionnaire measuring their connectedness to nature. The results show that when people experienced the space as a bird in virtual reality, they felt more connected to nature and also included more nature-related ideas in their designs. More specifically, ideas involving green infrastructure (nature-based elements, e.g., rain gardens and streams) and socioeconomic benefits were brought up by the bird group.
This research presents embodiment as a tool that can change how engineers design. As stormwater policy has called for greater use of green infrastructure (notably through the Environmental Protection Agency), embodiment may be used during the design process to meet this call from governmental programs. Furthermore, this research shapes how embodiment's effects on design can be interpreted, specifically through quantitative natural language processing methods and the use of large language models to analyze data and report design-related findings. This research is intended to benefit the design process with a) the use of different avatars in embodiment to influence design ideas and b) a comparison of thematic content analysis and large language models in summarizing design ideas and themes.
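One of the analysis methods named above, exploring the design space as a function of semantic distance, can be sketched with cosine distance over idea embeddings. The vectors below are invented placeholders standing in for real embedding-model output:

```python
import math

def cosine_distance(u, v):
    """1 minus cosine similarity; larger values mean semantically farther apart."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Hypothetical embedding vectors for transcribed design ideas.
rain_garden   = [0.9, 0.1, 0.2]
bioswale      = [0.8, 0.2, 0.3]
concrete_pipe = [0.1, 0.9, 0.4]

# Green ideas should cluster together; gray infrastructure sits farther away.
print(cosine_distance(rain_garden, bioswale) < cosine_distance(rain_garden, concrete_pipe))
```

With real transcripts, each design idea would be embedded by a language model and the pairwise distances used to map how widely each group explored the design space.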
44

Few-shot Question Generation with Prompt-based Learning

Wu, Yongchao January 2022 (has links)
Question generation (QG), which automatically generates good-quality questions from a piece of text, can lower the cost of manually composing questions. Recently, question generation has attracted increasing interest for its ability to supply large numbers of questions for developing conversation systems and educational applications, as well as for corpus development in natural language processing (NLP) research tasks such as question answering and reading comprehension. Previous neural QG approaches have achieved remarkable performance. However, these approaches require a large amount of data to train neural models properly, limiting the application of question generation in low-resource scenarios, e.g. with only a few hundred training examples. This thesis aims to address the low-resource scenario by investigating a recently emerged paradigm of NLP modelling, prompt-based learning. Prompt-based learning, which makes predictions based on the knowledge of a pre-trained language model and some simple textual task descriptions, has shown great effectiveness in various NLP tasks in few-shot and zero-shot settings, in which few or no examples are needed to train a model. In this project, we introduce a prompt-based question generation approach that constructs question generation task instructions understandable by a pre-trained sequence-to-sequence language model. Our experimental results show that our approach outperforms previous state-of-the-art question generation models by a wide margin: 36.8%, 204.8%, 455.9%, 1083.3%, and 57.9% for the metrics BLEU-1, BLEU-2, BLEU-3, BLEU-4, and ROUGE-L respectively in few-shot learning settings. We also conducted a quality analysis of the generated questions and found that our approach can generate questions with correct grammar and relevant topical information when trained with as few as 1,000 examples.
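The BLEU-n metrics reported above are built on clipped n-gram precision between a generated question and a reference. A minimal, self-contained sketch of that core computation; the example sentences are invented, and full BLEU additionally combines several n-gram orders with a brevity penalty:

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision, the core ingredient of BLEU-n."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Each candidate n-gram is credited at most as often as it appears in the reference.
    clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return clipped / total if total else 0.0

generated = "what is the capital of france"
reference = "what is the capital city of france"
print(round(ngram_precision(generated, reference, n=2), 2))
```

Here four of the five candidate bigrams appear in the reference, giving a BLEU-2-style precision of 0.8.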
45

Swedish Cultural Heritage in the Age of AI : Exploring Access, Practices, and Sustainability

Gränglid, Olivia, Ström, Marika January 2023 (has links)
This thesis aims to explore and gain an understanding of the current AI landscape within Swedish cultural heritage, using purposive interviews with five cultural heritage institutions that have ongoing AI projects. The study fills a knowledge gap concerning the practical implementation of AI at Swedish institutions, as well as the sustainable use of these technologies for cultural heritage. The overarching discussion further includes the related topics of ethical AI and long-term sustainability, framed from a perspective of information practices and socio-material entanglement. Findings show that AI technologies can play an important part in cultural heritage, with a range of practical applications if certain issues are overcome, and that the use of AI in the sector is expected to increase. The study also indicates a need for regulations, digitisation efforts, and increased investment in resources to adopt the technologies into current practices sustainably. The conclusion highlights a need for the cultural heritage sector to converge and find collectively applicable solutions for implementing AI.
46

Exploring the Genomic Basis of Antibiotic Resistance in Wastewater E. coli: Positive Selection, GWAS, and AI Language Model Analyses

Malekian Boroujeni, Negin 24 October 2023 (has links)
Antibiotic resistance is a critical threat to global health. This thesis examines the relationship between antibiotic resistance and genomic variations in E. coli from wastewater. E. coli is of interest because it causes urinary tract and other infections, and wastewater is a good source because it is a melting pot for E. coli from diverse origins. The research delves into two key aspects: whether antibiotic resistance data are included or excluded, and the level of granularity at which genomic variations are represented. The former matters because far more genomic data than antibiotic resistance data are available. Consequently, relying solely on genomic data, this thesis studies positive selection in E. coli to identify mutations and genes favored by evolution. This analysis demonstrates preferential selection of known antibiotic resistance genes and mutations, particularly mutations at functionally important sites of outer membrane porins, which may hence directly affect structure and function. Encouraged by these results, the study was expanded to include antibiotic resistance data and to examine genomic variations at three resolution levels: single mutations; unitigs (genome words) that may contain multiple mutations; and the whole coding genome, using machine learning classifier models that capture dependencies among multiple mutations and other genomic variations. The single-mutation representation detects well-known resistance mutations as well as potentially novel mechanisms related to biofilm formation and translation. By exploring larger genomic units such as genome words, the analysis confirms the findings from single mutations and additionally uncovers joint mutations in both known and novel genes. Finally, machine learning models, including AI language models, were trained to predict antibiotic resistance from the whole coding genome, achieving an accuracy of over 90% when sufficient data were available.
Overall, this thesis unveils new antibiotic resistance mechanisms, conducts one of the largest studies of positive selection in E. coli, and stands out as one of the pioneering studies to use AI language models for antibiotic resistance prediction.
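The "genome word" (unitig) features sitting between single mutations and the whole coding genome can be approximated, for illustration only, by fixed-length k-mer counting; real unitigs are derived from a compacted de Bruijn graph rather than a fixed window, so this is a deliberate simplification:

```python
from collections import Counter

def kmer_features(sequence, k=4):
    """Count fixed-length k-mers in a DNA sequence — a simplified stand-in
    for 'genome word' features; real unitigs come from a compacted
    de Bruijn graph, not a sliding window."""
    seq = sequence.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

genome_fragment = "ATGCGATGCA"   # toy fragment, not real E. coli sequence
features = kmer_features(genome_fragment, k=4)
print(features["ATGC"])
```

Count vectors like this, built per isolate, would then feed a classifier alongside resistance phenotype labels.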
47

Towards Building Privacy-Preserving Language Models: Challenges and Insights in Adapting PrivGAN for Generation of Synthetic Clinical Text

Nazem, Atena January 2023 (has links)
The growing development of artificial intelligence (AI), particularly neural networks, is transforming applications of AI in healthcare, yet it raises significant privacy concerns due to potential data leakage. Because neural networks memorise training data, they may inadvertently expose sensitive clinical data to privacy breaches, which can have serious repercussions such as identity theft, fraud, and harmful medical errors. While regulations such as the GDPR offer safeguards through guidelines, deeper technical protections are required to address data leakage. Reviews of various approaches show that one avenue of exploration is adapting Generative Adversarial Networks (GANs) to generate synthetic data for use in place of real data. Since GANs were originally designed and mainly researched for generating visual data, there is a notable gap in adapting GANs with privacy-preserving measures for generating synthetic text data. To address this gap, this study aims to answer the research questions of how a privacy-preserving GAN can be adapted to safeguard the privacy of clinical text data, and what challenges and potential solutions are associated with these adaptations. To this end, the existing privGAN framework—originally developed and tested for image data—was tailored to clinical text data. Following the design science research framework, modifications were made, while adhering to the privGAN architecture, to incorporate reinforcement learning (RL) to handle the discrete nature of text data. For synthetic data generation, this study used the 'Discharge summary' class from the Noteevents table of the MIMIC-III dataset, which contains clinical text in American English. The utility of the generated data was assessed using the BLEU-4 metric, and a white-box attack was conducted to test the model's resistance to privacy breaches.
The experiment yielded a very low BLEU-4 score, indicating that the generator could not produce synthetic data capturing the linguistic characteristics and patterns of real data. The relatively low white-box attack accuracy with one discriminator (0.2055) suggests that the trained discriminator was less effective at inferring sensitive information with high accuracy. While this may indicate a potential for preserving privacy, increasing the number of discriminators produced less favourable results (0.361). In light of these results, it is noted that defining the rewards as a measure of the discriminators' uncertainty may send a contradictory learning signal and lead to the low utility of the data. This study underscores the challenges of adapting privacy-preserving GANs for text data, given the inherent complexity of GAN training and the computational power required. To obtain better results in terms of utility, and to confirm the effectiveness of the privacy measures, further experiments should consider a more direct and granular reward system for the generator and seek an optimal learning rate. The findings thus reiterate the need for continued experimentation and refinement in adapting privacy-preserving GANs for clinical text.
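A reward defined as a measure of discriminator uncertainty can be illustrated with binary entropy, which peaks when a discriminator outputs 0.5, i.e. cannot tell real from synthetic. The thesis does not spell out its exact reward formula, so the form below is an assumed illustration:

```python
import math

def uncertainty_reward(disc_prob):
    """Binary entropy of the discriminator's 'real' probability: maximal
    (1 bit) at p = 0.5, where the discriminator is most uncertain."""
    p = min(max(disc_prob, 1e-12), 1 - 1e-12)  # clamp to avoid log(0)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(round(uncertainty_reward(0.5), 2))   # most uncertain -> highest reward
print(round(uncertainty_reward(0.99), 2))  # confident discriminator -> low reward
```

The contradiction the authors note is visible here: this reward tells the generator only to confuse the discriminator, not to match the linguistic patterns of real text, which can drive utility down.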
48

Fine-tuning a LLM using Reinforcement Learning from Human Feedback for a Therapy Chatbot Application / Finjustering av en LLM med hjälp av förstärkande inlärning från mänsklig återkoppling (eng. RLHF) för en Psykolog-chatbot applikation

Bill, Desirée, Eriksson, Theodor January 2023 (has links)
The field of AI and machine learning has seen exponential growth in the last decade, and even more so in the past year with the considerable public interest in Large Language Models (LLMs) such as ChatGPT. LLMs can be used for several purposes; one possible application is fine-tuning a model to perform a particular function in a specific field. The goal of this work is therefore to fine-tune an LLM in the field of psychology using a method called Reinforcement Learning from Human Feedback (RLHF), to determine whether it is a viable method in such cases. The theory behind LLMs and RLHF, as well as the ethical perspective on developing a psychological AI, is presented. Previous studies on both RLHF and AI in psychology are reviewed, showing that the goal is feasible. The method for both training and evaluating the model is then explained; evaluation is done by comparing a pre-trained model with the fine-tuned one. The study is scientifically relevant because, although RLHF has been used to fine-tune LLMs before, it has not been applied with the aim of specialising a model for a particular field. The results did not show any clear difference between the pre-trained and the fine-tuned model; therefore, more tests are required. Given the limitations in hardware, training time, and available data, there is much room for improvement in future studies. An ethical framework applied to a digital psychology assistant is discussed, and a suitable introduction to the market and division of responsibilities is proposed.
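The reward-model stage of RLHF is typically trained with a Bradley-Terry pairwise loss over human preference pairs, pushing the reward of the human-preferred reply above that of the rejected one. A minimal sketch of this standard formulation (not necessarily the exact loss used in the thesis):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry pairwise loss used to train RLHF reward models:
    -log sigmoid(r_chosen - r_rejected)."""
    diff = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# The loss shrinks as the reward model ranks the preferred reply higher,
# and grows when the ranking is inverted.
print(preference_loss(2.0, 0.5) < preference_loss(0.5, 2.0))
```

The trained reward model then supplies the scalar signal that a policy-gradient method (commonly PPO) maximises when fine-tuning the LLM.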
49

An In-Depth study on the Utilization of Large Language Models for Test Case Generation

Johnsson, Nicole January 2024 (has links)
This study investigates the utilization of Large Language Models for test case generation. It uses the large language model and embedding model provided by Llama, specifically Llama 2 of size 7B, to generate test cases from a defined input. The implementation uses two customization techniques: Retrieval Augmented Generation (RAG) and prompt engineering. In this study, RAG stores organisation-specific information locally and uses it to create test cases; this stored data complements the data the large language model was pre-trained on. With this method, the implementation can draw on specific organisation data and therefore gains a greater understanding of the required domains. The objective of the study is to investigate how AI-driven test case generation affects overall software quality and development efficiency. This is evaluated by comparing the output of the AI-based system to manually created test cases, as manual creation was the company standard at the time of the study. The AI-driven test cases are analyzed mainly in terms of coverage and time: we compare to what degree the AI system can generate test cases compared to the manually created ones, and we consider time in order to understand how development efficiency is affected. The results reveal that by using Retrieval Augmented Generation in combination with prompt engineering, the system is able to identify test cases to a certain degree. For one specific project, 66.67% of the test cases were identified by the AI; however, minor noise could appear, and results may differ depending on the project's complexity. Overall, the results show how the system can positively affect development efficiency and arguably also software quality.
However, it is important to understand that the implementation, at its current stage, is not sufficient to be used independently; rather, it should be used as a tool to create test cases more efficiently.
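The retrieval step of RAG, finding locally stored organisation documents relevant to a query before prompting the LLM, can be sketched with a toy overlap-based ranker. Real systems rank by embedding similarity, and the documents below are invented:

```python
def retrieve(query, documents, top_k=2):
    """Rank locally stored documents by word overlap with the query —
    a toy stand-in for the embedding-based retrieval step of RAG."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

# Hypothetical requirement snippets standing in for organisation data.
docs = [
    "Login requires a valid username and password",
    "The report module exports CSV files",
    "Password reset sends an email with a token",
]
context = retrieve("generate test cases for password login", docs)
prompt = ("Context:\n" + "\n".join(context)
          + "\n\nWrite test cases for: password login")
print(context[0])
```

The assembled `prompt` is what would be sent to the LLM, grounding its generated test cases in the retrieved requirements.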
50

Exploring artificial intelligence bias : a comparative study of societal bias patterns in leading AI-powered chatbots.

Udała, Katarzyna Agnieszka January 2023 (has links)
The development of artificial intelligence (AI) has revolutionised the way we interact with technology and each other, both in society and in professional careers. Although they come with great potential for productivity and automation, AI systems have been found to exhibit biases that reflect and perpetuate existing societal inequalities. With the recent rise of artificial intelligence tools built on large language model (LLM) technology, such as ChatGPT, Bing Chat and Bard AI, this research project aims to investigate the extent of AI bias in these tools and explore its ethical implications. By reviewing and analysing responses to carefully crafted prompts generated by three different AI chatbot tools, the author intends to determine whether the content generated by these tools exhibits patterns of bias related to various social identities, and to compare the extent to which such bias is present across the three tools. This study contributes to the growing body of literature on AI ethics and informs efforts to develop more equitable and inclusive AI systems. By exploring the ethical dimensions of AI bias in selected LLMs, this research sheds light on the broader societal implications of AI and the role of technology in shaping our future.
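A common way to operationalise this kind of comparison is to hold the prompt constant and vary only the social-identity term, so that any systematic difference in the chatbots' responses can be attributed to that term. A small sketch; the template and identity terms are illustrative assumptions, not the study's actual prompts:

```python
def make_probe_pairs(template, identity_terms):
    """Fill one prompt template with different social-identity terms so that
    the prompts differ only in the identity mentioned."""
    return {term: template.format(identity=term) for term in identity_terms}

probes = make_probe_pairs(
    "Describe a typical day for a {identity} software engineer.",
    ["female", "male", "nonbinary"],
)
for term in sorted(probes):
    print(probes[term])
```

Each probe would be submitted to all three chatbots, and the responses coded for biased patterns before comparing across tools.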
