31
Embodied Virtual Reality: The Impacts of Human-Nature Connection During Engineering Design. Trump, Joshua Jordan, 19 March 2024.
The engineering design process can underutilize nature-based solutions during infrastructure development. Instances of nature within the built environment are reflections of the human-nature connection, which may alter how designers ideate solutions to a given design task, especially through virtual reality (VR) as an embodied perspective-taking platform. Embodied VR helps designers "see" as an end-user sees, inclusive of the natural environment, through the uptake of an avatar such as a bird or fish. Embodied VR elicits empathy toward the avatar: to see as a bird in VR, for example, is to tend to feel and think as a bird. Embodied VR also influences altruistic behavior toward the environment, specifically through proenvironmental behaviors. However, little research has examined the impact of embodied VR on the human-nature connection, or whether embodied VR affects how designers ideate, particularly around nature-based solutions as a form of proenvironmental behavior during the design process. This research first presents a formal measurement of embodied VR's impact on the human-nature connection and then maps this impact onto design-related proenvironmental behaviors through design ideas, i.e., by tracking changes in nature-based design choices.
The design study consisted of three groups of engineering undergraduate students, each given a case study and plan review: a VR group embodying a bird (n=35), a self-lens VR group (n=34), and a control group (n=33). The case study concerned a federal mandate to minimize combined sewer overflow in a neighborhood within Cincinnati, OH. Following the plan review, the VR groups were given a VR walkthrough or flythrough of the case study area of interest as their assigned avatar (embodied: a bird; self-lens: oneself). Participants were tested for their connectedness to nature, and a mock design charrette was held to measure engineering design ideas. Verbal protocol analysis was used, with participants instructed to think aloud; design ideation sessions were recorded and manually transcribed.
The results of the study indicated that embodiment impacts the human-nature connection based on participants' perceived connection to nature. Only the bird group showed an increase in connectedness to nature, whereas the self-lens and control groups reported no change. This change in connectedness to nature was also reflected in the engineering design ideas. The bird group was more likely to ideate green-thinking designs that addressed the stormwater issue while benefiting both nature and socioeconomic conditions, whereas the control group mostly discussed gray designs as the catalyst for minimizing combined sewer overflows. The self-lens group also mentioned green design ideas and socioeconomic change, but mostly framed people, rather than nature, as the beneficiary of the design, in contrast to the bird group. These findings were derived through thematic content analysis, an exploration of the design space as a function of semantic distance, and large language models (LLMs) used to synthesize design ideas and themes. The LLM identified design ideas with accuracy comparable to thematic content analysis, but struggled to cross-compare groups to provide generalizable findings. This research is intended to benefit the engineering design process with a) evidence of how perspective-taking through different embodied VR lenses shapes design ideas and b) various methods to supplement thematic content analysis for coding design ideas. / Doctor of Philosophy / Nature in the constructed world, such as rain gardens and natural streams for moving stormwater, is underused during the design process. Virtual reality (VR) programs, like embodiment, have the potential to increase the incorporation of nature and nature-based elements during design. Embodiment is the process of taking on the vantage point of another being or avatar, such as a bird, fish, or insect, in order to see and move as the avatar does. Embodied VR increases the likelihood that the VR participant will act favorably toward the subject, specifically when the natural environment is involved. For example, embodying an individual cutting down trees in a virtual forest increased the likelihood that participants would later act favorably toward the environment, such as by recycling or conserving energy (Ahn and Bailenson, 2012). Ultimately, this research measures the level of connection participants feel with the environment after an embodied VR experience and explores whether this change in connection to nature affects how participants design a solution to a problem.
This design experiment is based on a case study, which all participants were provided alongside supplemental plan documents. The case study concerns stormwater issues and overflows from infrastructure in a neighborhood in Cincinnati, OH, where key decision-makers were mandated by the federal government to minimize the overflows. The bird group (a bird avatar) performed a flythrough of the area of interest in VR, whereas the self-lens group (first-person, embodying oneself) walked through the same area. The control group received no VR intervention. Following the intervention, participants were asked to redesign the neighborhood and narrate their recorded solution. Participants then completed a questionnaire measuring their connectedness to nature. The results show that when people experienced the space as a bird in virtual reality, they felt more connected to nature and included more nature-related ideas in their designs. More specifically, ideas involving green infrastructure (using nature-based elements, e.g., rain gardens and streams) and socioeconomic benefits were brought up by the bird group.
This research presents embodiment as a tool that can change how engineers design. As stormwater policy has called for greater use of green infrastructure (notably through the Environmental Protection Agency), embodiment may be used during the design process to meet this call from governmental programs. Furthermore, this research shapes how embodiment's effects on design can be interpreted, specifically through quantitative methods based on natural language processing and the use of large language models to analyze data and report design-related findings. This research is intended to benefit the design process with a) the use of different avatars in embodiment to influence design ideas and b) a comparison of thematic content analysis and large language models in summarizing design ideas and themes.
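The design-space analysis described above treats semantic distance between participants' transcribed ideas as a measure of how widely they explored. A minimal sketch of one common way to compute such pairwise distances with sentence embeddings follows; the embedding model and the example idea snippets are illustrative assumptions, not the dissertation's actual data or pipeline.

```python
# A minimal sketch (not the dissertation's actual pipeline) of measuring how far
# apart transcribed design ideas sit in a semantic embedding space.
# Assumes the sentence-transformers package; the model name and example ideas
# are illustrative placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_distances

ideas = [
    "Install rain gardens along the street to absorb runoff before it reaches the sewer.",
    "Replace the combined sewer main with a larger-diameter concrete pipe.",
    "Daylight the buried stream and add a riparian buffer for habitat and overflow storage.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
embeddings = model.encode(ideas)                 # one vector per design idea

# Pairwise semantic distance: larger values mean the ideas are farther apart in design space.
distances = cosine_distances(embeddings)
for i, row in enumerate(distances):
    print(f"idea {i}: " + ", ".join(f"{d:.2f}" for d in row))
```

Averaging or plotting such distances per group is one way to compare how broadly each condition's participants ranged across the design space.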
32
Swedish Cultural Heritage in the Age of AI : Exploring Access, Practices, and Sustainability. Gränglid, Olivia; Ström, Marika, January 2023.
This thesis aims to explore and gain an understanding of the current AI landscape within Swedish cultural heritage, using purposive interviews with five cultural heritage institutions that have ongoing AI projects. The study fills a knowledge gap regarding the practical implementation of AI at Swedish institutions, as well as the sustainable use of these technologies for cultural heritage. The overarching discussion further includes the related topics of ethical AI and long-term sustainability, framed from a perspective of Information Practices and socio-material entanglement. Findings show that AI technologies can play an important part in cultural heritage, with a range of practical applications if certain issues are overcome, and suggest that the utilisation of AI will increase. The study also indicates a need for regulation, digitisation efforts, and increased investment in resources to adopt the technologies into current practices sustainably. The conclusion highlights a need for the cultural heritage sector to converge and find collectively applicable solutions for implementing AI.
33
Fine-tuning a LLM using Reinforcement Learning from Human Feedback for a Therapy Chatbot Application / Finjustering av en LLM med hjälp av förstärkande inlärning från mänsklig återkoppling (eng. RLHF) för en Psykolog-chatbot applikation. Bill, Desirée; Eriksson, Theodor, January 2023.
The field of AI and machine learning has seen exponential growth in the last decade, and even more so in the past year with the considerable public interest in large language models (LLMs) such as ChatGPT. LLMs can be used for several purposes, and one possible application is fine-tuning a model to perform a particular function in a specific field. The goal is therefore to fine-tune an LLM in the field of psychology using a relatively new method, Reinforcement Learning from Human Feedback (RLHF), to determine whether it is viable in such cases. The theory behind LLMs and RLHF, as well as the ethical perspective on developing a psychological AI, is presented. Previous studies on both RLHF and AI in psychology are presented, showing that the goal is feasible. The method for both training and evaluating the model is then explained; evaluation is done by comparing a pre-trained model with the fine-tuned one. The study is considered scientifically relevant because, although RLHF has been used to fine-tune LLMs before, it has not been applied with the intent of specialising a model for a particular field. The results did not show any clear difference between the pre-trained and the fine-tuned model; therefore, more tests are required. However, given the limitations regarding hardware, training time, and available data, there is considerable room for improvement in future studies. An ethical framework applied to a digital psychology assistant is discussed, and a suitable introduction to the market and division of responsibilities is proposed. / Området AI och maskininlärning har sett exponentiell tillväxt under det senaste decenniet och ännu mer under det senaste året med det stora allmänintresset för stora språkmodeller som ChatGPT. Stora språkmodeller kan användas till flera saker där en möjlig tillämpning är att finjustera en modell för att fylla en viss funktion inom ett specifikt yrke. Målet med arbetet är därför att finjustera en språkmodell inom området psykologi med hjälp av en ny metod kallad Reinforcement Learning from Human Feedback för att undersöka metodens tillämplighet. Teorin bakom stora språkmodeller och RLHF samt det etiska perspektivet på att utveckla en digital psykologiassistent förklaras. Därefter presenteras tidigare studier om både RLHF och AI inom psykologi som visar att målet är genomförbart. Metoden för att både träna och utvärdera modellen förklaras, vilket görs genom att jämföra den förtränade modellen med den finjusterade. Studien bedöms som vetenskapligt relevant eftersom RLHF visserligen har använts för att finjustera språkmodeller tidigare, men inte med målet att finjustera en språkmodell till ett visst yrke. Resultatet visade inte på någon tydlig skillnad mellan den förtränade och den finjusterade modellen, därför krävs fler tester. Men med de begränsningar som fanns gällande hårdvara, tid att träna och tillgänglig data är det mycket som kan förbättras i framtida studier. Det etiska ramverket applicerat på en digital psykologiassistent diskuteras och en lämplig introduktion till marknaden och ansvarsfördelning föreslås.
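As a concrete illustration of the RLHF recipe referenced above, the sketch below shows only the reward-modelling step: training a scorer on human preference pairs so that preferred responses receive higher scores than rejected ones. The tiny bag-of-words model and random token ids are placeholders chosen for illustration; the thesis's actual setup, and the subsequent policy fine-tuning stage (e.g., PPO), are not reproduced here.

```python
# A minimal sketch of the reward-modelling step at the heart of RLHF: given a
# prompt with a human-preferred ("chosen") and a less-preferred ("rejected")
# response, train a scorer so that score(chosen) > score(rejected).
# The toy bag-of-words reward model and random token ids are illustrative
# stand-ins; in practice the scorer is a transformer with a scalar head, and a
# separate PPO stage (omitted here) fine-tunes the language model against it.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 5000, 64

class ToyRewardModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB, DIM)  # mean-pools token embeddings
        self.head = nn.Linear(DIM, 1)             # scalar reward

    def forward(self, token_ids):
        return self.head(self.embed(token_ids)).squeeze(-1)

model = ToyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder token ids standing in for tokenized (prompt + response) pairs.
chosen = torch.randint(0, VOCAB, (8, 32))    # batch of preferred responses
rejected = torch.randint(0, VOCAB, (8, 32))  # batch of rejected responses

for step in range(100):
    r_chosen, r_rejected = model(chosen), model(rejected)
    # Bradley-Terry preference loss: maximize P(chosen preferred over rejected).
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```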
34
An In-Depth Study on the Utilization of Large Language Models for Test Case Generation. Johnsson, Nicole, January 2024.
This study investigates the utilization of large language models for test case generation. It uses the large language model and embedding model from the Llama family, specifically Llama 2 of size 7B, to generate test cases from a defined input. The implementation uses two customization techniques: Retrieval Augmented Generation (RAG) and prompt engineering. RAG is a method that, in this study, stores organisation-specific information locally for use in creating test cases. This stored data complements the pre-trained data that the large language model has already been trained on. By using this method, the implementation can draw on specific organisation data and therefore gain a greater understanding of the required domains. The objective of the study is to investigate how AI-driven test case generation impacts overall software quality and development efficiency. This is evaluated by comparing the output of the AI-based system to manually created test cases, as this was the company standard at the time of the study. The AI-driven test cases are analyzed mainly in terms of coverage and time: coverage captures the degree to which the AI system can generate test cases compared to the manually created ones, while time is taken into consideration to understand how development efficiency is affected. The results reveal that by using Retrieval Augmented Generation in combination with prompt engineering, the system is able to identify test cases to a certain degree. The results show that 66.67% of a specific project was identified using the AI; however, minor noise could appear, and results might differ depending on the project's complexity. Overall, the results show how the system can positively impact development efficiency, and it could also be argued to have a positive effect on software quality. However, it is important to understand that the implementation, at its current stage, is not sufficient to be used independently, but should rather be used as a tool to create test cases more efficiently.
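To make the retrieval step of the RAG setup described above concrete, the sketch below embeds locally stored organisation chunks, retrieves the most relevant ones for a requirement, and assembles a test-generation prompt. The embedding model, the document snippets, and the requirement text are assumptions made for illustration, not the thesis's actual setup, and the Llama 2 generation call is left as a stub.

```python
# A minimal sketch of the retrieval-augmented prompting pattern: embed locally
# stored organisation documents, pull the chunks most relevant to a requirement,
# and splice them into the prompt sent to the language model. All data below is
# an illustrative placeholder; the generation step is not shown.
from sentence_transformers import SentenceTransformer, util

org_chunks = [
    "The login service locks an account after five failed password attempts.",
    "Orders above 10 000 SEK require a second approver before dispatch.",
    "Session tokens expire after 30 minutes of inactivity.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = embedder.encode(org_chunks, convert_to_tensor=True)

def build_prompt(requirement: str, top_k: int = 2) -> str:
    """Retrieve the most relevant chunks and wrap them in a test-generation prompt."""
    query_vec = embedder.encode(requirement, convert_to_tensor=True)
    hits = util.semantic_search(query_vec, chunk_vecs, top_k=top_k)[0]
    context = "\n".join(org_chunks[h["corpus_id"]] for h in hits)
    return (
        "You are a QA engineer. Using only the context below, write test cases "
        f"for the requirement.\n\nContext:\n{context}\n\nRequirement: {requirement}\n"
    )

prompt = build_prompt("Verify account lockout behaviour on repeated failed logins.")
print(prompt)  # in the setting described above, this prompt would be passed to a Llama 2 7B model
```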
35
Exploring artificial intelligence bias : a comparative study of societal bias patterns in leading AI-powered chatbots. Udała, Katarzyna Agnieszka, January 2023.
The development of artificial intelligence (AI) has revolutionised the way we interact with technology and each other, both in society and in professional careers. Although they come with great potential for productivity and automation, AI systems have been found to exhibit biases that reflect and perpetuate existing societal inequalities. With the recent rise of artificial intelligence tools built on large language model (LLM) technology, such as ChatGPT, Bing Chat and Bard AI, this research project aims to investigate the extent of AI bias in these tools and explore its ethical implications. By reviewing and analysing the responses that three different AI chatbot tools generate to carefully crafted prompts, the author intends to determine whether the content generated by these tools exhibits patterns of bias related to various social identities, and to compare the extent to which such bias is present across all three tools. This study will contribute to the growing body of literature on AI ethics and inform efforts to develop more equitable and inclusive AI systems. By exploring the ethical dimensions of AI bias in selected LLMs, this research will shed light on the broader societal implications of AI and the role of technology in shaping our future.
36
Preserving Knowledge in Power Line Engineering with Language Models and Design. Götling, Axel, January 2024.
The loss of senior expertise in power line design poses a critical challenge to the sustainable energy transition. Current methods of knowledge transfer fail to prevent the loss of invaluable knowledge needed by future junior power line designers. Additionally, the rise of informal deployment of generative language models may threaten to bury hand-written knowledge documents before this knowledge can be extracted, structured, and preserved for future guidance. This thesis proposes a framework in which large language models are integrated into knowledge transfer and decision-making guidance for an engineering enterprise. Using this framework, the thesis further explores how data-driven knowledge tools can assist junior design engineers by supporting information retrieval and directing them to knowledge sources. The ability of a large language model to retrieve relevant knowledge from an engineering design document was validated by comparison with human designers manually completing a similar task. In this evaluation, involving six participants and the large language model, responses to questions on the mechanical dimensioning of stays for utility poles were ranked by experts. The results showed that the large language model's responses were, on average, ranked similarly to those of the junior designers. A small-scale demonstrative knowledge tool, insights from interviews, literature studies, and the results from the validation study together lead to the conclusion that large language models can assist power line designers via a knowledge tool. Beyond power line design, this thesis contributes to the understanding of how data-driven language models can assist knowledge retrieval and decision-making across other engineering design domains. The work utilizes a professional education document on the mechanical dimensioning of wooden power line poles, including an analysis of how wind span and weight span affect the dimensions of the pole; this document was developed in parallel to the thesis as a case study, and its original design data supported the tests conducted here. The work also discusses risks and ethical aspects of implementing such a knowledge tool. Risks such as leakage of classified information are emphasized, and comprehensive systems and methods are needed to avoid them. It is therefore highlighted how important it is to carry out such projects with care and expertise to avoid damage to companies and society. Local language models or highly trusted AI system providers are recommended to ensure that no sensitive information is leaked to an unwanted third party. With a high degree of caution and consideration of the risks, an effective knowledge tool can contribute to increased efficiency, faster and more sustainable development of power line infrastructure, and thus a faster energy transition. / Förlusten av senior expertis inom kraftledningskonstruktion utgör en kritisk utmaning för den hållbara energiomställningen. Nuvarande metoder för kunskapsöverföring är otillräckliga för att förhindra förlusten av ovärderlig kunskap som är nödvändig för framtida juniora kraftledningsprojektörer. Dessutom kan den ökade informella användningen av generativa språkmodeller hota att begrava mänskligt skrivna kunskapsdokument.
Detta arbete presenterar ett ramverk där storskaliga språkmodeller används för att underlätta kunskapsöverföring och tillhandahålla vägledning vid beslutsfattande inom ingenjörsföretag. Med hjälp av detta ramverk utforskar arbetet ytterligare hur datadrivna kunskapsverktyg kan hjälpa juniora kraftledningskonstruktörer genom att stödja informationsinhämtning med hänvisning till kunskapskällorna. En storskalig språkmodells förmåga att hämta relevant kunskap från ett tekniskt designdokument validerades genom att jämföra processen för mänskliga designers som manuellt slutförde en liknande uppgift. I denna utvärdering, som involverade sex deltagare och den storskaliga språkmodellen, rankades svaren på frågor om mekanisk dimensionering av stag för kraftledningsstolpar av experter. Resultaten visade att den storskaliga språkmodellens svar i genomsnitt rankades på liknande nivå som de juniora ingenjörernas. Tillsammans med ett småskaligt demonstrativt kunskapsverktyg, insikter från intervjuer med kraftledningskonstruktörer, litteraturstudier samt resultat från valideringsstudien dras slutsatsen att storskaliga språkmodeller kan stödja kraftledningskonstruktörer via ett kunskapsverktyg. Utöver kraftledningskonstruktion bidrar detta arbete till förståelsen av hur datadrivna språkmodeller kan hjälpa till med kunskapsinhämtning och beslutsfattande inom andra tekniska designområden. Arbetet använder ett professionellt utbildningsunderlag om mekanisk dimensionering av kraftledningsstolpar i träkonstruktion, inklusive en analys av det vertikala och horisontella linspannets påverkan på stolpens dimension, utvecklat parallellt med detta arbete. Originaldesigndata från underlaget stödde de tester som genomfördes. Arbetet belyser även risker och etiska aspekter vid implementering av ett sådant kunskapsverktyg. Risker som läckage av sekretessbelagd information betonas, och omfattande system och metoder behövs för att undvika dem. Därför understryks hur viktigt det är att genomföra liknande projekt med noggrannhet, försiktighet och expertis för att undvika skador på företag och samhälle. Lokala språkmodeller eller API-leverantörer med högt förtroende rekommenderas för att minimera risken att känslig information läcker ut till en oönskad tredje part. Med stor försiktighet och hänsyn till riskerna kan ett effektivt kunskapsverktyg bidra till ökad effektivitet, snabbare och mer hållbar utveckling av kraftledningsinfrastruktur, och därmed en snabbare energiomställning.
37
Generativ AI i gymnasieskolan : Undersökning av en lektionsseries påverkan på gymnasieelevernas färdigheter / Generative AI in Upper Secondary School : Investigating the impact of a lesson series on upper secondary students' skills. Piorkowski, Bartosz Michal, January 2024.
Denna kvasiexperimentella studie syftade till att undersöka hur en lektionsserie kan struktureras och implementeras med mål att utveckla gymnasieelevers förmåga att använda sig av generativ artificiell intelligens som ett pedagogiskt verktyg. För att möta detta syfte genomfördes tre lektioner om artificiell intelligens, maskininlärning, neurala nätverk och stora språkmodeller med fokus på utveckling av teknisk kunskap och praktiska färdigheter med inslag av etik och kritik. Valet av dessa teman grundades i ett tidigare etablerat ramverk för undervisning inom AI-läskunnighet. Vidare tas dessa teman upp som del av teknikprogrammet och den kommande AI-kursen enligt Skolverkets förslag. Lektionsseriens påverkan kvantifierades med hjälp av två enkäter – en innan och en efter genomförandet av lektionsserien. Lektionsserien presenterades för två gymnasieklasser vilka bestod av totalt ungefär 50 elever. Urvalet av gymnasieklasserna grundades i deras anslutning till den uppdragsgivande läraren. Vidare valdes respondenterna till enkäten utifrån de elever som fysiskt deltog på den första och sista lektionen och frivilligt valde att svara på enkäten. Dessutom intervjuades fyra tekniklärare för att bättre anpassa lektionsinnehållet till målgruppen. Analysen av svarsfrekvensen till enkätfrågorna visade att lektionsserien hade en statistiskt signifikant påverkan på elevernas tekniska kunskaper, men dess påverkan på elevernas praktiska färdigheter var i stort statistiskt insignifikant. Samtidigt påvisade frekvensanalysen att gymnasieeleverna i regel överskattade sin förmåga att kritiskt granska datorgenererad text och var i stort omedvetna om relevanta etiska frågeställningar. Den explorativa faktoranalysen visade att det existerar åtminstone två typer av elever. En elevgrupp av okänd storlek använder sig av stora språkmodeller för att accelerera sina studier genom att lösa problem de annars inte kunde lösa. I detta fall har artificiell intelligens en multiplicerande effekt på elevernas produktivitet. En annan elevgrupp av okänd storlek har i stället som mål att förbättra sina skolresultat genom att använda sig av stora språkmodeller för att lösa deras problem åt dem. Samtidigt överskattar dessa elever sin förmåga att granska datorgenererad text. I detta fall har artificiell intelligens en dämpande effekt på elevernas lärande. Studiens slutsats är att det i dagsläget finns behov av undervisning av gymnasieelever på teknikprogrammet om artificiell intelligens. Detta utrymme kan i stort uppfyllas av en tre lektioner lång lektionsserie. Dock erkänner studien att det finns ytterligare utrymme för praktiska moment där läraren handleder eleverna i deras användning av verktyg såsom ChatGPT. Vidare finns det utrymme för kontinuerligt arbete med kritik och etik, möjligtvis som del av de tidigare nämnda praktiska momenten. / This quasi-experimental study aimed to investigate how a series of lessons could be structured and implemented with the goal of developing upper secondary students' ability to use generative artificial intelligence as an educational tool. To meet this goal, three lessons on artificial intelligence, machine learning, neural networks, and large language models were conducted, focusing on the development of technical knowledge and practical skills with the inclusion of ethics and critical thinking. The choice of these topics was based on a previously established framework for AI-literacy education.
Further, these topics are brought up as part of the Swedish upper secondary school technology programme as well as the upcoming AI course proposed by the Swedish National Agency for Education (Skolverket). The impact of the lesson series was quantified using two surveys – one before and one after the implementation of the lesson series. The lesson series was presented to two student classes totalling roughly 50 students. The selection of student classes was based on their affiliation with the assigning teacher. Further, the survey respondents were sampled from the students who physically attended the first and last lessons and voluntarily elected to respond. Additionally, four technology teachers were interviewed to better adapt the teaching material to the student demographic. Response analysis showed that the lesson series had a statistically significant impact on students' technical knowledge, but its impact on students' practical skills was largely statistically insignificant. At the same time, the frequency analysis indicated that students generally overestimated their ability to critically evaluate computer-generated text and were largely unaware of relevant ethical issues. Exploratory factor analysis showed that there exist at least two types of students. One student group of unknown size uses large language models to accelerate their studies by solving problems they could not otherwise solve; in this case, artificial intelligence has a multiplying effect on the students' productivity. Another group of unknown size instead uses large language models to solve their problems for them, with the goal of improving their academic performance, while overestimating their ability to evaluate computer-generated text critically; in this case, artificial intelligence has a dampening effect on the students' learning. The study concludes that there is a need to teach upper secondary students in the technology programme about artificial intelligence, and that this need can largely be met by a series of three lessons. However, the study acknowledges that there remains room for practical activities where the teacher guides students in their use of tools such as ChatGPT, and for ongoing work on critical thinking and ethics, possibly as part of those practical activities.
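As an illustration of the pre/post comparison the study describes, the sketch below runs a paired non-parametric test over survey scores collected before and after a lesson series. The Likert-scale numbers are fabricated placeholders rather than the study's data, and the choice of the Wilcoxon signed-rank test is an assumption, not the study's reported procedure.

```python
# A minimal sketch of a paired pre/post comparison of survey scores: the same
# students rated before and after the lesson series, tested for a statistically
# significant shift. The scores below are fabricated placeholders.
import numpy as np
from scipy import stats

pre = np.array([2, 3, 2, 4, 3, 2, 3, 3, 2, 4])   # self-rated knowledge before (1-5)
post = np.array([4, 4, 3, 5, 4, 3, 4, 4, 3, 5])  # self-rated knowledge after (1-5)

# Wilcoxon signed-rank test: a paired, non-parametric test suited to ordinal scores.
statistic, p_value = stats.wilcoxon(pre, post)
print(f"W = {statistic:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The shift in self-rated knowledge is statistically significant.")
else:
    print("No statistically significant shift detected.")
```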
38
Development of a Semantic Search Tool for Swedish Legal Judgements Based on Fine-Tuning Large Language Models. Mikkelsen Toth, Sebastian, January 2024.
Large language models (LLMs) are very large deep learning models that are pre-trained on huge amounts of data. Among them are sentence bidirectional encoder representations from transformers (SBERT) models, to which advanced training methods such as the transformer-based sequential denoising autoencoder (TSDAE), generative query network (GenQ), and an adaptation of generative pseudo-labelling (GPL) can be applied. This thesis project aims to develop a semantic search tool for Swedish legal judgments in order to overcome the limitations of traditional keyword searches in legal document retrieval. To this end, a model adept at understanding the semantic nuances of legal language was developed by leveraging natural language processing (NLP) and fine-tuning LLMs like SBERT using advanced training methods such as TSDAE, GenQ, and an adaptation of GPL. To generate labelled data out of unlabelled data, a GPT-3.5 model was used after it was fine-tuned; generating labelled data with a generative model was crucial for training the SBERT model efficiently. The search tool was evaluated, and the evaluation demonstrates that it can accurately retrieve relevant documents based on semantic queries and significantly improve the efficiency and accuracy of legal research. GenQ proved to be the most efficient training method for this use case.
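As an illustration of one of the training methods named above, the sketch below sets up TSDAE-style unsupervised fine-tuning with the sentence-transformers library. The Swedish BERT checkpoint, the sample sentences, and the output path are placeholders; the thesis's actual corpus, hyperparameters, and GenQ/GPL stages are not reproduced.

```python
# A minimal sketch of TSDAE-style unsupervised fine-tuning using the
# sentence-transformers API. The base checkpoint and the sample sentences are
# placeholders; a real run would use a large corpus of Swedish legal text.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, datasets, losses

# Build an SBERT-style encoder: transformer + CLS pooling (as in the TSDAE setup).
word_embedding = models.Transformer("KB/bert-base-swedish-cased")
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_embedding, pooling])

# Unlabelled sentences; TSDAE corrupts each one and learns to reconstruct it.
train_sentences = [
    "Hovrätten fastställer tingsrättens dom.",
    "Käranden yrkar ersättning för rättegångskostnader.",
    "Förvaltningsrätten avslår överklagandet.",
]
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
train_dataloader = DataLoader(train_dataset, batch_size=2, shuffle=True)

# Decoder weights are tied to the encoder; only the encoder is kept afterwards.
train_loss = losses.DenoisingAutoEncoderLoss(
    model, decoder_name_or_path="KB/bert-base-swedish-cased", tie_encoder_decoder=True
)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, weight_decay=0)
model.save("sbert-swedish-legal-tsdae")  # placeholder output path
```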
39
Developing Intelligent Chatbots at Scania : Integrating Technological Solutions and Data Protection Considerations. Söderberg, Johan, January 2024.
This thesis researches the complex intersection of Data Protection and Intelligent Chatbots (IC) at Scania Group. Developing intelligent chatbots in a secure and GDPR-compliant way is a highly complicated and multifaceted task. The purpose of this research is to provide Scania with organizational knowledge on how this can be achieved. The study utilizes the Action Design Research framework to develop an artifact that integrates technological solutions with data protection considerations. By conducting a literature review and semi-structured interviews with employees at Scania, three potential solutions are identified and evaluated: ChatGPT Enterprise, the Secured AI Knowledge Repository (SAIKR), and Techtalker. Each solution offers different capabilities and compliance strategies: ChatGPT Enterprise, while practical, relies on contractual assurances for GDPR compliance, with data stored in the USA. SAIKR, on the other hand, offers more control, with data stored and encrypted in Sweden, allowing for the use of advanced privacy-preserving techniques. Techtalker, which is hosted directly by Scania, provides enhanced security measures tailored to specific technical use cases. Based on the artifact and the conclusions of this research, generalized design principles for developing intelligent chatbots within a corporate structure are formulated. These four design principles encourage the utilization of RAG and LLMs, safe and legal data localization, strong contractual safeguards with third-party providers, and a comprehensive risk analysis with stringent security measures.
40
Narrative Engineering: Tools, Computational Structure, and Impact of Stories. DeBuse, Michael A., 23 December 2024.
Computational Linguistics has a long history of applying mathematics to the grammatical and syntactic structure of language; however, applying mathematics to the more complex aspects of language, such as narrative, plot, scenes, character relations, and causation, remains a difficult problem. The goal of my research is to bridge the narrative humanities with mathematics, to computationally grasp these difficult topics, and to help develop the field of Narrative Engineering. I view narrative and story with the same mathematical scrutiny as other engineering fields: taking the creativity and fluidity of story and encoding it in mathematical representations that have meaning beyond the probabilistic and statistical predictions that are the primary function of modern large language models. Included in this research is how stories and narratives are structured, evolve, and change, implying that there exists an inherent narrative computation that we as humans perform to merge and combine ideas into new and novel ones. Our thoughts, knowledge, and opinions determine the stories we tell, as a combination of everything we have seen, read, heard, and otherwise experienced. Narratives have the ability to inform and change those thoughts and opinions, which then lead to the creation of new and novel narratives. In essence, stories can be seen as a programming language for people. My dissertation, then, seeks to better understand stories and the environments in which stories are shared. I do this by developing tools that detect, extract, and model aspects of stories and their environments; developing mathematical models of stories and their spread environments; and investigating the impact and effects on stories and their spread environments. I finish with a discussion of the ethical concerns of research in narrative influence and opinion control.