1. Training reinforcement learning model with custom OpenAI gym for IIoT scenario
Norman, Pontus, January 2022
This study consists of an experiment to see, as a proof of concept, how well it would work to implement an industrial gym environment for training a reinforcement learning model. To determine this, the model is trained repeatedly and tested; if it completes the training scenario, which is a representation of the environment, that training iteration counts as a success. The time it takes to train for a certain number of game episodes is measured, as is the number of episodes the model needs to reach an acceptable outcome of 80% of the maximum score and the time it takes to train those episodes. These measurements are evaluated, and conclusions are drawn about how well the reinforcement learning models worked. The tools used are a Q-learning algorithm implemented from scratch and deep Q-learning with TensorFlow. The manually implemented Q-learning algorithm showed varying results depending on environment design and on how long the agent was trained, with success rates ranging from 100% to 0%, and the times it took to train the agent to an acceptable level were 0.116, 0.571 and 3.502 seconds depending on which environment was tested (see the result chapter for more information on the environments). The TensorFlow implementation gave either a 100% or a 0% success rate; since I believe these polarizing results were caused by an issue with the implementation, I chose not to take measurements for more than one environment, and because that model never reached a stable outcome above 80%, no training time to an acceptable level was measured for it.
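As an illustration of the kind of setup this abstract describes, the sketch below shows a small custom Gym environment and a tabular Q-learning loop that records how many episodes are needed before the agent reaches 80% of the maximum score, and how long that training takes. The environment, rewards and hyperparameters are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch only, not the thesis code: a toy "industrial" environment and a
# tabular Q-learning loop, written against the classic gym API
# (reset() -> obs; step() -> obs, reward, done, info).
import time
import numpy as np
import gym
from gym import spaces

class ToyIIoTEnv(gym.Env):
    """Hypothetical 5-state machine; the agent must drive it to state 4."""
    MAX_SCORE = 1.0

    def __init__(self):
        self.observation_space = spaces.Discrete(5)
        self.action_space = spaces.Discrete(2)   # 0 = step back, 1 = step forward
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state = max(0, min(4, self.state + (1 if action == 1 else -1)))
        done = self.state == 4
        return self.state, (self.MAX_SCORE if done else 0.0), done, {}

def train(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.2, window=20, target=0.8):
    """Tabular Q-learning; records when the trailing mean score reaches 80% of the maximum."""
    q = np.zeros((env.observation_space.n, env.action_space.n))
    scores, episodes_to_target, start = [], None, time.time()
    for ep in range(episodes):
        obs, score = env.reset(), 0.0
        for _ in range(50):                       # cap episode length
            a = env.action_space.sample() if np.random.rand() < epsilon else int(np.argmax(q[obs]))
            nxt, r, done, _ = env.step(a)
            q[obs, a] += alpha * (r + gamma * np.max(q[nxt]) - q[obs, a])
            obs, score = nxt, score + r
            if done:
                break
        scores.append(score)
        if (episodes_to_target is None and len(scores) >= window
                and np.mean(scores[-window:]) >= target * env.MAX_SCORE):
            episodes_to_target = ep + 1
    return q, episodes_to_target, time.time() - start

q_table, n_episodes, seconds = train(ToyIIoTEnv())
print(f"reached 80% of max score after {n_episodes} episodes in {seconds:.3f} s")
```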
2. Chattbotar inom mjukvaruutveckling [Chatbots in software development]
Friström, Alex; Wallén, Daniel, January 2023
This work examines the utilization of chatbots in programming and their effects on developer productivity, code quality, and problem-solving. The surge in AI technology and the popularity of chatbots has been remarkable since the end of 2022, when OpenAI introduced ChatGPT, capable of providing rapid and accurate responses to inquiries. This introduces novel opportunities for information accessibility without human interaction. Previous research within this domain has explored the usability of earlier chatbots in design-related professions, revealing a certain degree of utility. Now, with the advancement of AI, new prospects arise for investigating their utility. Emerging technologies often introduce functionalities that facilitate or simplify specific tasks. Therefore, the aim of this study is to explore and analyze how chatbots such as ChatGPT and GitHub Copilot can function as interactive aids to streamline programming and systems development. Conducted as a qualitative study within the realms of programming and systems development, this work employs semi-structured qualitative interviews as its primary method of data collection. To analyze the information gathered from these interviews, a thematic analysis approach is adopted, facilitating the identification of commonalities and disparities in the responses. The findings of this study demonstrate that AI tools have proven to be effective and beneficial in areas like information retrieval and fundamental programming tasks, yet exhibit limitations in advanced programming endeavors and complex problem-solving. The study encompasses respondents who have employed these tools in their work and who possess the expertise and experience to offer insights into developers' utilization of these tools in software development.
3. RETRIEVAL-AUGMENTED GENERATION WITH AZURE OPEN AI
Andersson, Henrik, January 2024
This thesis investigates the implementation of a Retrieval-Augmented Generation (RAG) Teams chat bot to enhance the efficiency of a service organization, utilizing Microsoft Azure's AI services. The project combines the retrieval capabilities of Azure AI Search with OpenAI's GPT-3.5 Turbo and Meta's Llama 3 70B-instruct. The aim is to develop a chat bot capable of handling both structured and unstructured data. The motivation for this work comes from the limitations of standalone Large Language Models (LLMs), which often fail to provide accurate and contextually relevant answers without external knowledge. The project uses the retriever and two language models and evaluates them using F1 scoring. The retriever performs well, but the RAG model produces wrong or overly long answers. Metrics other than F1 scoring could be used, and future work on prompt engineering as well as larger test datasets could improve model performance.
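The abstract does not state exactly how F1 scoring was applied to generated answers, but a common recipe is token-level F1 against a reference answer, as used in reading-comprehension benchmarks. The sketch below shows that assumed recipe; it is not necessarily the evaluation code from the thesis.

```python
# Assumed token-overlap F1 for comparing a generated answer to a reference answer;
# the normalization rules and the example strings are illustrative, not from the thesis.
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase, strip punctuation, split on whitespace."""
    return re.sub(r"[^\w\s]", " ", text.lower()).split()

def answer_f1(prediction: str, reference: str) -> float:
    pred, ref = tokenize(prediction), tokenize(reference)
    if not pred or not ref:
        return float(pred == ref)
    overlap = sum((Counter(pred) & Counter(ref)).values())  # shared tokens, with multiplicity
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(answer_f1("The SLA response time is 4 hours",
                "Response time according to the SLA is 4 hours"))  # 0.875
```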
4. Implementing an OpenAI Gym for Machine Learning of Microgrid Electricity Trading
Lundholm, André, January 2021
Society is today moving away from centralized power towards decentralized systems. Instead of buying from large companies that generate electricity from fossil fuels, consumers now have many renewable alternatives. Since consumers can generate solar power with solar panels, they can also become producers. This creates a large market for trading electricity between consumers instead of companies, forming a so-called microgrid. The purpose of this thesis is to find a solution for buying and selling on these microgrids. Using a Q-learning solution with the OpenAI Gym toolkit and a microgrid simulation, this thesis aims to answer the following questions: to what extent can Q-learning be used to buy and sell energy in a microgrid system, how long does the buy-and-sell algorithm take to train, and does latency affect the feasibility of Q-learning for microgrids? To answer these questions, I measure the latency and training time of the Q-learning solution. A neural network solution was also created for comparison with the Q-learning solution. Some of the results were not entirely reliable, but some conclusions could still be drawn. First, the extent to which Q-learning can be used to buy and sell is quite good if one looks only at the accuracy result of 97%, although this depends on the microgrid simulation being correct. The time it takes the buy-and-sell algorithm to train was measured at about 12 seconds. The latency is considered zero with the Q-learning solution, so its feasibility is high. Through these questions I can conclude that a Q-learning OpenAI Gym solution is a viable one.
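The two quantities the abstract reports, training time and the latency of a trained buy/sell decision, could be collected roughly as in the sketch below. The state/action sizes and the random stand-in for the microgrid simulation are assumptions, not the thesis setup.

```python
# Sketch of the two measurements described: training time of a tabular Q-learning agent
# and the latency of a single trained buy/sell decision. The sizes and the random
# "simulation" transitions are placeholders, not the thesis environment.
import time
import numpy as np

N_STATES, N_ACTIONS = 24, 3          # e.g. hour of day x {buy, sell, hold} (assumed)
ALPHA, GAMMA = 0.1, 0.95
rng = np.random.default_rng(0)
q = np.zeros((N_STATES, N_ACTIONS))

# --- training time ---
start = time.perf_counter()
for _ in range(100_000):             # stand-in for stepping the microgrid simulation
    s, a = rng.integers(N_STATES), rng.integers(N_ACTIONS)
    s_next, r = rng.integers(N_STATES), rng.normal()       # placeholder reward signal
    q[s, a] += ALPHA * (r + GAMMA * q[s_next].max() - q[s, a])
training_seconds = time.perf_counter() - start

# --- decision latency of the trained policy ---
start = time.perf_counter()
for s in range(N_STATES):
    _ = int(np.argmax(q[s]))          # a greedy table lookup is all a decision costs
decision_latency = (time.perf_counter() - start) / N_STATES

print(f"training: {training_seconds:.3f} s, per-decision latency: {decision_latency * 1e6:.1f} µs")
```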
5. Sammanvävning av AI-teknologi för att effektivisera möten och dokumentation : En Design Science-approach [Interweaving AI technology to streamline meetings and documentation: A design science approach]
Gunnarsson, Jonathan, January 2024
This study presents an in-depth evaluation of a website integrating OpenAI Whisper and ChatGPT-4 for automatic transcription and extraction of meeting dialogues, using design science as the research strategy. The introduction highlights the need for such a system and its potential application areas. The theoretical background elucidates key concepts and technologies in artificial intelligence. Insights into existing methods and their strengths and weaknesses are gained through a review of previous research and similar systems. The methodology section illuminates how the design science strategy was applied to define requirements, develop, and evaluate the system. The process is described in detail and includes steps such as problem identification, survey for data collection, artifact development, demonstration, and evaluation. The results of the user evaluation highlight both positive and negative aspects of the system. User feedback was used to identify areas for improvement and suggest paths for future development. In conclusion, despite some limitations, the system has the potential to be a useful resource in various application areas, and design science proves to be an effective method for the development and evaluation of such systems. This report has contributed to an increased understanding of the design process behind AI-based systems and their utility in practical applications, where strengths and weaknesses have been identified through the application of design science, leading to suggestions for future development.
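A rough sketch of the transcription-and-extraction pipeline the site is described as building on OpenAI Whisper and a GPT-4 chat model. It uses the public OpenAI Python SDK; the file name, model identifiers and prompt wording are assumptions rather than details taken from the thesis.

```python
# Assumed pipeline sketch: Whisper transcription, then a GPT-4 chat model extracting
# meeting minutes. Model ids, file name and prompt are illustrative, not the thesis code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_meeting(audio_path: str) -> str:
    # 1) Speech-to-text with Whisper
    with open(audio_path, "rb") as audio:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

    # 2) Extract decisions and action points from the transcript with a chat model
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize the meeting transcript into decisions, action points and owners."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return completion.choices[0].message.content

print(summarize_meeting("meeting.mp3"))  # hypothetical recording
```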
6. ChatGPT: A gateway to AI generated unit testing / ChatGPT: En ingångspunkt till AI genererade enhetstester
Fiallos Karlsson, Daniel; Abraham, Philip, January 2023
This paper studies how the newly released AI ChatGPT can be used to reduce the time and effort software developers spend on writing unit tests, and more specifically whether ChatGPT can generate quality unit tests. Another aspect of the study is how the prompting of ChatGPT can be optimized for generating unit tests, by creating a prompt framework. Lastly, the generated unit tests were compared to human-written tests. This was done by conducting an experiment where ChatGPT was prompted to generate unit tests for predefined code written in C# or TypeScript, which were then evaluated and rated. After a generated unit test had been rated, the next steps were determined and the process was repeated, with the results logged following a diary study. The rating system was constructed with the help of previous research and interviews with software developers working in the industry, which defined what a high-quality unit test should include. The interviews also helped in understanding ChatGPT's perceived capabilities. The experiment showed that ChatGPT can generate unit tests of good quality, though with certain issues. For example, reusing the same prompt multiple times revealed a lack of consistency in the responses, including different testing approaches (for example, how setup methods were used), different testing areas and sometimes varying quality. The inconsistencies were reduced by using the deduced prompt framework, but the issue could be a current limitation of ChatGPT that a future release might address.
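The paper's deduced prompt framework is not reproduced in the abstract, but the general shape of such a prompt can be sketched as fixed instructions about test framework, conventions and coverage, followed by the code under test. The structure, field names and wording below are assumptions, not the authors' framework.

```python
# Hypothetical prompt template in the spirit of a "prompt framework" for unit-test
# generation; the structure and wording are assumptions, not taken from the paper.
def build_unit_test_prompt(code: str, language: str = "C#", test_framework: str = "xUnit") -> str:
    return "\n".join([
        f"You are writing unit tests for the {language} code below.",
        "Requirements:",
        f"- Use {test_framework} and follow the Arrange-Act-Assert pattern.",
        "- Cover normal cases, edge cases and invalid input.",
        "- Test one behaviour per test and use descriptive test names.",
        "- Do not add code that was not asked for.",
        "",
        "Code under test:",
        code,
        "",
        "Return only the test class, nothing else.",
    ])

print(build_unit_test_prompt("public static int Add(int a, int b) => a + b;"))
```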
7. ChatGPT: A Good Computer Engineering Student? : An Experiment on its Ability to Answer Programming Questions from Exams
Loubier, Michael, January 2023
The release of ChatGPT has set new standards for what an artificial intelligence chatbot should be. It has even shown its potential in answering university-level exam questions from different subjects. This research is focused on evaluating its capabilities in programming subjects. To achieve this, coding questions taken from software engineering exams (N = 23) were posed to the AI in an experiment. Statistical analysis was then carried out to find out how good a student ChatGPT is by analyzing its answers' correctness, degree of completion, diversity of response, speed of response, extraneity, number of errors, length of response and confidence levels. GPT-3.5 is the version analyzed, and the experiment used questions from three different programming subjects. The results showed a 93% rate of correct answer generation, demonstrating its competence. However, the AI occasionally produced unnecessary lines of code that were not asked for, which were treated as extraneity. The confidence levels given by ChatGPT, which were always high, also did not always align with response quality, showing the subjectiveness of the AI's self-assessment. Answer diversity was also a concern: most answers were repeatedly written in nearly the same way, and when there was diversity in the answers, it also produced much more extraneous code. If ChatGPT were blind-tested on a software engineering exam containing a good number of coding questions, unnecessary lines of code and comments could be what gives it away as an AI. Nonetheless, ChatGPT was found to have great potential as a learning tool: it can offer explanations, debugging help, and coding guidance just as any other tool or person could. It is not perfect, though, so it should be used with caution.
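As an illustration of how the per-question ratings listed in the abstract could be aggregated, the sketch below uses hypothetical field names and example values; it is not the study's dataset or analysis code.

```python
# Hypothetical aggregation of per-answer ratings like those described in the abstract.
from dataclasses import dataclass
from statistics import mean

@dataclass
class AnswerRating:
    correct: bool
    extraneous_lines: int     # lines of code that were not asked for
    errors: int
    response_length: int      # e.g. in lines
    confidence: int           # ChatGPT's self-reported confidence, e.g. 1-10

def summarize(ratings: list[AnswerRating]) -> dict[str, float]:
    return {
        "correct_rate": mean(r.correct for r in ratings),
        "avg_extraneous_lines": mean(r.extraneous_lines for r in ratings),
        "avg_errors": mean(r.errors for r in ratings),
        "avg_length": mean(r.response_length for r in ratings),
        "avg_confidence": mean(r.confidence for r in ratings),
    }

ratings = [AnswerRating(True, 2, 0, 35, 9), AnswerRating(False, 5, 1, 60, 9)]  # example values
print(summarize(ratings))
```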
8. Utveckling av AI-verktyg för textgenerering: Ingresser och produktbeskrivningar / Development of an AI Tool for Text Generation: Intros and Product Descriptions
Falkman, Hugo; Sturesson, William, January 2024
This research aims to evaluate the potential of a GPT model to streamline editors' work in generating textual content for various products. The main research question is: "Is it possible to integrate a GPT model into the TinyMCE editing platform to streamline editors' work in generating text content for various products within an e-commerce company?" The focus is on facilitating the editing process for editors by providing an integrated solution for text generation, which is expected to increase productivity and the quality of the generated texts. The work resulted in the development of an AI tool integrated with the TinyMCE editing platform, where the GPT model serves as the engine for text generation. The findings demonstrate that the developed tool can effectively produce relevant textual content of satisfactory quality. By offering a user-friendly and integrated solution for editors, the tool is expected to contribute to increased productivity and efficiency in the editing process. However, since the GPT model tends to generalize and draw its own conclusions when the provided information is insufficiently clear, the tool is not autonomous. It is crucial for editors to carefully review the output to ensure the accuracy and truthfulness of the rendered textual content.
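A sketch of what the server-side generation step behind a TinyMCE toolbar button could look like, with the GPT model as the engine. The model id, prompt wording and product attributes are assumptions, not details of the tool described in the thesis.

```python
# Hypothetical server-side generation step that a TinyMCE button could call; the
# model id, prompt wording and attribute fields are assumptions, not the thesis code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_product_text(kind: str, product_facts: dict[str, str]) -> str:
    """kind is e.g. 'intro' or 'product description'; product_facts holds known attributes."""
    facts = "\n".join(f"- {k}: {v}" for k, v in product_facts.items())
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Write marketing copy for an e-commerce product page. "
                        "Use only the facts provided; do not invent specifications."},
            {"role": "user", "content": f"Write a short {kind} for this product:\n{facts}"},
        ],
    )
    return completion.choices[0].message.content

print(generate_product_text("product description",
                            {"name": "Trail Jacket", "material": "recycled polyester"}))
```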
9. ChatGPT - Möjligheter och utmaningar : En studie om ChatGPT:s påverkan inom skolans värld / ChatGPT: Opportunities and challenges : A study on the impact of ChatGPT in the world of education
Yousif Harut, Eva, January 2024
This study examines both the opportunities and challenges of including ChatGPT in education from the perspective of middle and high school teachers. The main objective is to highlight not only the challenges posed by AI but also its potential benefits in education, as negative opinions about ChatGPT are more commonly heard. The study applies sociocultural and pragmatist epistemological perspectives to gain a deeper understanding of teachers' views on the opportunities and challenges that ChatGPT can bring to education. The results show that ChatGPT offers several opportunities as well as challenges, which are explained in more detail in the study. The most interesting finding is that critical thinking emerges as a common factor in the teachers' opinions: the ability to critically evaluate information has become more important with the introduction of ChatGPT. Teachers believe that ChatGPT can serve as a valuable complement in education, provided that students learn to use the tool in a critical and responsible manner. This can enhance students' critical thinking and analytical skills, which are crucial for their development.
10. Investigating the Effects of Nudges for Facilitating the Use of Trigger Warnings and Content Warnings
Altland, Emily Caroline, 27 June 2024
Social media can trigger past traumatic memories in viewers when posters share sensitive content. Strict content moderation and blocking/reporting features do not work when triggers are nuanced and the posts may not violate site guidelines. Viewer-side interventions exist to help filter and hide certain content, but these put all the responsibility on the viewer and typically act as 'aftermath interventions'. Trigger and content warnings offer a unique solution, giving viewers the agency to scroll past content they may want to avoid. However, there is a lack of education and awareness among posters about how to add a warning and which topics may require one. We conducted this study to determine whether poster-side interventions, such as a nudge algorithm that prompts adding warnings to sensitive posts, would increase social media users' knowledge and understanding of how and when to add trigger and content warnings. To investigate the effectiveness of a nudge algorithm, we designed the TWIST (Trigger Warning Includer for Sensitive Topics) app. The TWIST app scans tweet content to determine whether a TW/CW is needed and, if so, nudges the social media poster to add one, with an example of what it may look like. We then conducted a 4-part mixed-methods study with 88 participants. Our key findings include: (1) nudging social media users to add TW/CWs educates them on triggering topics and raises their awareness when posting in the future, (2) social media users can learn how to add a trigger/content warning through using a nudge app, (3) researchers grew in understanding of how a nudge algorithm like TWIST can change people's behavior and perceptions, and (4) we provide empirical evidence of the effectiveness of such interventions, even in short-term use.
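The abstract says TWIST scans tweet text to decide whether a TW/CW is needed but does not describe the detection mechanism, so the keyword-based check below is purely an illustrative stand-in for that step.

```python
# Purely illustrative stand-in for TWIST's sensitivity check: the real app's detection
# logic is not described in the abstract, so a simple keyword lookup is used here.
from typing import Optional

SENSITIVE_TOPICS = {               # hypothetical topic -> trigger phrases mapping
    "self-harm": ["self harm", "self-harm"],
    "violence": ["assault", "shooting"],
    "eating disorders": ["anorexia", "bulimia"],
}

def suggest_warning(tweet: str) -> Optional[str]:
    """Return a suggested TW/CW line if the tweet mentions a sensitive topic."""
    text = tweet.lower()
    topics = [topic for topic, phrases in SENSITIVE_TOPICS.items()
              if any(p in text for p in phrases)]
    if not topics:
        return None
    return "TW/CW: " + ", ".join(topics)   # shown to the poster as a nudge, not auto-added

warning = suggest_warning("Recovering after the assault last year...")
if warning:
    print(f"Consider adding a warning, e.g. '{warning}', before posting.")
```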