  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Fine-tuning a LLM using Reinforcement Learning from Human Feedback for a Therapy Chatbot Application / Finjustering av en LLM med hjälp av förstärkande inlärning från mänsklig återkoppling (eng. RLHF) för en Psykolog-chatbot applikation

Bill, Desirée, Eriksson, Theodor January 2023 (has links)
The field of AI and machine learning has seen exponential growth in the last decade, and even more so in the past year with the considerable public interest in Large Language Models (LLMs) such as ChatGPT. LLMs can be used for several purposes; one possible application is fine-tuning a model to perform a particular function in a specific field. The goal is therefore to fine-tune an LLM in the field of psychology using a method called Reinforcement Learning from Human Feedback (RLHF) to determine whether it is a viable method in such cases. The theory behind LLMs and RLHF, as well as the ethical perspective on developing a psychological AI, is presented. Previous studies on both RLHF and AI in psychology are reviewed, showing that the goal is feasible. The method for both training and evaluating the model is then explained; evaluation is done by comparing a pre-trained model with the fine-tuned one. The study is considered scientifically relevant because, although RLHF has been used to fine-tune LLMs before, it has not been done with the intent of specializing a model in a particular field. The results did not show any clear difference between the pre-trained and the fine-tuned model; therefore, more tests are required. Given the limitations regarding hardware, training time, and available data, there is much room for improvement in future studies. An ethical framework applied to a digital psychology assistant is discussed, and a suitable introduction to the market and division of responsibilities is proposed. / Området AI och maskininlärning har sett exponentiell tillväxt under det senaste decenniet och ännu mer under det senaste året med det stora allmänintresset för stora språkmodeller som ChatGPT. Stora språkmodeller kan användas till flera saker, där en möjlig tillämpning är att finjustera en modell för att fylla en viss funktion inom ett specifikt yrke.
Målet med arbetet är därför att finjustera en språkmodell inom området psykologi med hjälp av en metod kallad Reinforcement Learning from Human Feedback (RLHF) för att undersöka metodens tillämplighet. Teorin bakom stora språkmodeller och RLHF samt det etiska perspektivet på att utveckla en digital psykologiassistent förklaras. Därefter presenteras tidigare studier om både RLHF och AI inom psykologi som visar att målet är genomförbart. Metoden för att både träna och utvärdera modellen förklaras; utvärderingen görs genom att jämföra den förtränade modellen med den finjusterade. Studien bedöms som vetenskapligt relevant eftersom RLHF visserligen har använts för att finjustera språkmodeller tidigare, men inte med målet att specialisera en språkmodell mot ett visst yrke. Resultatet visade inte på någon tydlig skillnad mellan den förtränade och den finjusterade modellen, därför krävs fler tester. Men med de begränsningar som fanns gällande hårdvara, träningstid och tillgänglig data är det mycket som kan förbättras i framtida studier. Det etiska ramverket applicerat på en digital psykologiassistent diskuteras och en lämplig introduktion till marknaden och ansvarsfördelning föreslås.
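The RLHF method summarized above rests on a reward model fitted to human preference rankings. As a hedged illustration only (the function, scores, and setup below are hypothetical sketches of the standard pairwise preference loss, not the thesis's implementation), the core of reward-model training can be written in a few lines:

```python
import math

def reward_model_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise preference loss commonly used to train an RLHF reward
    model: -log(sigmoid(r_chosen - r_rejected)). The loss shrinks as the
    model scores the human-preferred response above the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# The loss is small when the preferred response already outscores the
# rejected one, and large when the ordering is inverted.
confident = reward_model_loss(2.0, -1.0)   # correct ordering, wide margin
uncertain = reward_model_loss(0.0, 0.0)    # no preference learned yet
wrong     = reward_model_loss(-1.0, 2.0)   # inverted ordering
```

The fine-tuned policy is then optimized against this learned reward (typically with PPO), which is where the hardware and data limitations mentioned in the abstract bite hardest.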
22

An In-Depth study on the Utilization of Large Language Models for Test Case Generation

Johnsson, Nicole January 2024 (has links)
This study investigates the use of Large Language Models for test case generation. It uses the language model and embedding model provided by Llama, specifically Llama 2 (7B), to generate test cases from a defined input. The implementation combines two customization techniques: Retrieval Augmented Generation (RAG) and prompt engineering. In this study, RAG stores organisation-specific information locally and uses it when creating test cases; this stored data complements the data the large language model was pre-trained on, giving the implementation access to specific organisational data and thus a greater understanding of the required domains. The objective of the study is to investigate how AI-driven test case generation affects overall software quality and development efficiency. This is evaluated by comparing the output of the AI-based system to manually created test cases, the company standard at the time of the study. The AI-driven test cases are analyzed mainly in terms of coverage and time: coverage measures to what degree the AI system can generate test cases compared to the manually created ones, while time indicates how development efficiency is affected. The results reveal that, by using Retrieval Augmented Generation in combination with prompt engineering, the system is able to identify test cases to a certain degree: 66.67% of the test cases in a specific project were identified by the AI, although minor noise could appear and results might differ depending on the project's complexity. Overall, the results show that the system can positively impact development efficiency, and it could also be argued to have a positive effect on software quality.
However, it is important to understand that the implementation, at its current stage, is not sufficient to be used independently; rather, it should be used as a tool to create test cases more efficiently.
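As a rough illustration of the RAG setup described above — with a bag-of-words cosine similarity standing in for the Llama embedding model, and a hypothetical document store and prompt template (none of which come from the thesis) — retrieval-augmented prompt construction might look like:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank locally stored organisation documents by similarity to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved documents are prepended as context, so the generator sees
    # organisation-specific data on top of its pre-trained knowledge.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nTask: write test cases for: {query}"

docs = [
    "Login service: users authenticate with email and password",
    "Billing service: invoices are generated monthly",
    "Search service: queries are matched against product names",
]
prompt = build_prompt("test cases for the login flow with password checks", docs)
```

A production system would replace `embed` with the actual embedding model and send `prompt` to the LLM; the skeleton above only shows how retrieval and generation fit together.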
23

Exploring artificial intelligence bias : a comparative study of societal bias patterns in leading AI-powered chatbots.

Udała, Katarzyna Agnieszka January 2023 (has links)
The development of artificial intelligence (AI) has revolutionised the way we interact with technology and each other, both in society and in professional careers. Although they come with great potential for productivity and automation, AI systems have been found to exhibit biases that reflect and perpetuate existing societal inequalities. With the recent rise of artificial intelligence tools built on large language model (LLM) technology, such as ChatGPT, Bing Chat and Bard AI, this research project aims to investigate the extent of AI bias in these tools and explore its ethical implications. By reviewing and analysing responses to carefully crafted prompts generated by three different AI chatbot tools, the author intends to determine whether the content generated by these tools exhibits patterns of bias related to various social identities, and to compare the extent to which such bias is present across the three tools. This study contributes to the growing body of literature on AI ethics and informs efforts to develop more equitable and inclusive AI systems. By exploring the ethical dimensions of AI bias in selected LLMs, this research sheds light on the broader societal implications of AI and the role of technology in shaping our future.
24

Preserving Knowledge in Power Line Engineering with Language Models and Design

Götling, Axel January 2024 (has links)
The loss of senior expertise in power line design poses a critical challenge to the sustainable energy transition. Current methods of knowledge transfer fail to prevent the loss of invaluable knowledge needed by future junior power line designers. Additionally, the rise of informal deployment of generative language models may threaten to bury hand-written knowledge documents before their contents can be extracted, structured, and preserved for future guidance. This thesis proposes a framework in which large language models are integrated into knowledge transfer and decision-making guidance for an engineering enterprise. Using this framework, the thesis further explores how data-driven knowledge tools can assist junior design engineers by supporting information retrieval and directing them to knowledge sources. The ability of a large language model to retrieve relevant knowledge from an engineering design document was validated by comparing it with human designers manually completing a similar task. In this evaluation, involving six participants and the large language model, responses to questions on the mechanical dimensioning of stays for utility poles were ranked by experts. The results showed that the large language model's responses were, on average, ranked similarly to those of the junior designers. A small-scale demonstrative knowledge tool, insights from interviews, literature studies, and the results of the validation study together lead to the conclusion that large language models can assist power line designers via a knowledge tool. Beyond power line design, this thesis contributes to the understanding of how data-driven language models can assist knowledge retrieval and decision-making across other engineering design domains.
This work draws on a professional education document on the mechanical dimensioning of wooden power line poles, including an analysis of the wind and weight spans' effect on the pole's dimensions; the document was developed in parallel with this thesis as a case study, and its original design data supported the tests conducted here. The work also discusses risks and ethical aspects of implementing such a knowledge tool. Risks such as leakage of classified information are emphasized, and comprehensive systems and methods are needed to avoid them. It is therefore highlighted how important it is to carry out such projects with care and expertise to avoid damage to companies and society. Local language models or highly trusted AI system providers are recommended to ensure that no sensitive information is leaked to an unwanted third party. With a high degree of caution and consideration of the risks, an effective knowledge tool can contribute to increased efficiency and a faster, more sustainable development of power line infrastructure, and thus a faster energy transition. / Förlusten av senior expertis inom kraftledningskonstruktion utgör en kritisk utmaning för den hållbara energiomställningen. Nuvarande metoder för kunskapsöverföring är otillräckliga för att förhindra förlusten av ovärderlig kunskap som är nödvändig för framtida juniora kraftledningsprojektörer. Dessutom kan den ökade informella användningen av generativa språkmodeller hota att begrava mänskligt skrivna kunskapsdokument. Detta arbete presenterar ett ramverk där storskaliga språkmodeller används för att underlätta kunskapsöverföring och tillhandahålla vägledning vid beslutsfattande inom ingenjörsföretag.
Med hjälp av detta ramverk utforskar arbetet ytterligare hur datadrivna kunskapsverktyg kan hjälpa juniora kraftledningskonstruktörer genom att stödja informationsinhämtning med hänvisning till kunskapskällorna. En storskalig språkmodells förmåga att hämta relevant kunskap från ett tekniskt designdokument validerades genom att jämföra processen med mänskliga designers som manuellt slutförde en liknande uppgift. I denna utvärdering, som involverade sex deltagare och den storskaliga språkmodellen, rankades svaren på frågor om mekanisk dimensionering av stag för kraftledningsstolpar av experter. Resultaten visade att den storskaliga språkmodellens svar i genomsnitt rankades på liknande nivå som de juniora ingenjörernas. Tillsammans med ett småskaligt demonstrativt kunskapsverktyg, insikter från intervjuer med kraftledningskonstruktörer, litteraturstudier samt resultat från valideringsstudien dras slutsatsen att storskaliga språkmodeller kan stödja kraftledningskonstruktörer via ett kunskapsverktyg. Utöver kraftledningskonstruktion bidrar detta arbete till förståelsen av hur datadrivna språkmodeller kan hjälpa till med kunskapsinhämtning och beslutsfattande inom andra tekniska designområden. Arbetet använder ett professionellt utbildningsunderlag om mekanisk dimensionering av kraftledningsstolpar i träkonstruktion, inklusive en analys av det vertikala och horisontella linspannets påverkan på stolpens dimension, utvecklat parallellt med detta arbete; originaldesigndata från underlaget stödde de tester som genomfördes. Arbetet belyser även risker och etiska aspekter vid implementering av ett sådant kunskapsverktyg. Risker som läckage av sekretessbelagd information betonas, och omfattande system och metoder behövs för att undvika dem. Därför understryks hur viktigt det är att genomföra liknande projekt med noggrannhet, försiktighet och expertis för att undvika skador på företag och samhälle.
Lokala språkmodeller eller API-leverantörer med högt förtroende rekommenderas för att minimera risken att känslig information läcker ut till en oönskad tredje part. Med stor försiktighet och hänsyn till riskerna kan ett effektivt kunskapsverktyg bidra till ökad effektivitet, snabbare och mer hållbar utveckling av kraftledningsinfrastruktur, och därmed en snabbare energiomställning.
25

Generativ AI i gymnasieskolan : Undersökning av en lektionsseries påverkan på gymnasieelevernas färdigheter / Generative AI in Upper Secondary School : Investigating the impact of a lesson series on upper secondary students' skills

Piorkowski, Bartosz Michal January 2024 (has links)
Denna kvasiexperimentella studie syftade till att undersöka hur en lektionsserie kan struktureras och implementeras med målet att utveckla gymnasieelevers förmåga att använda generativ artificiell intelligens som ett pedagogiskt verktyg. För att möta detta syfte genomfördes tre lektioner om artificiell intelligens, maskininlärning, neurala nätverk och stora språkmodeller med fokus på utveckling av teknisk kunskap och praktiska färdigheter med inslag av etik och kritik. Valet av dessa teman grundades i ett tidigare etablerat ramverk för undervisning inom AI-läskunnighet. Vidare tas dessa teman upp som en del av teknikprogrammet och den kommande AI-kursen enligt Skolverkets förslag. Lektionsseriens påverkan kvantifierades med hjälp av två enkäter – en innan och en efter genomförandet av lektionsserien. Lektionsserien presenterades för två gymnasieklasser som bestod av totalt ungefär 50 elever. Urvalet av gymnasieklasserna grundades i deras anknytning till den uppdragsgivande läraren. Vidare valdes respondenterna till enkäten utifrån de elever som fysiskt deltog på den första och sista lektionen och frivilligt valde att svara på enkäten. Dessutom intervjuades fyra tekniklärare för att bättre anpassa lektionsinnehållet till målgruppen. Analysen av svaren på enkätfrågorna visade att lektionsserien hade en statistiskt signifikant påverkan på elevernas tekniska kunskaper, men dess påverkan på elevernas praktiska färdigheter var i stort statistiskt insignifikant. Samtidigt påvisade frekvensanalysen att gymnasieeleverna i regel överskattade sin förmåga att kritiskt granska datorgenererad text och i stort var omedvetna om relevanta etiska frågeställningar. Den explorativa faktoranalysen visade att det finns åtminstone två typer av elever. En elevgrupp av okänd storlek använder stora språkmodeller för att accelerera sina studier genom att lösa problem de annars inte hade kunnat lösa.
I detta fall har artificiell intelligens en multiplicerande effekt på elevernas produktivitet. En annan elevgrupp av okänd storlek har i stället som mål att förbättra sina skolresultat genom att använda stora språkmodeller för att lösa problemen åt dem. Samtidigt överskattar dessa elever sin förmåga att granska datorgenererad text. I detta fall har artificiell intelligens en dämpande effekt på elevernas lärande. Studiens slutsats är att det i dagsläget finns ett behov av att undervisa gymnasieelever på teknikprogrammet om artificiell intelligens. Detta behov kan i stort uppfyllas av en tre lektioner lång lektionsserie. Dock erkänner studien att det finns ytterligare utrymme för praktiska moment där läraren handleder eleverna i deras användning av verktyg såsom ChatGPT. Vidare finns det utrymme för kontinuerligt arbete med kritik och etik, möjligtvis som en del av de tidigare nämnda praktiska momenten. / This quasi-experimental study aimed to investigate how a series of lessons could be structured and implemented with the goal of developing upper secondary students' ability to use generative artificial intelligence as an educational tool. To meet this goal, three lessons on artificial intelligence, machine learning, neural networks, and large language models were conducted, focusing on the development of technical knowledge and practical skills with the inclusion of ethics and critical thinking. The choice of these topics was based on a previously established framework for AI-literacy education. Furthermore, these topics are brought up as part of the Swedish upper secondary school technology programme as well as the upcoming AI course proposed by the Swedish National Agency for Education. The impact of the lesson series was quantified using two surveys – one before and one after the implementation of the lesson series. The lesson series was presented to two classes totalling roughly 50 students.
The selection of classes was based on their affiliation with the assigning teacher. Further, the survey respondents were sampled from the students who physically attended the first and last lessons and voluntarily elected to respond. Additionally, four technology teachers were interviewed to better adapt the teaching material to the student demographic. Response analysis showed that the lesson series had a statistically significant impact on the students' technical knowledge, but its impact on their practical skills was largely statistically insignificant. At the same time, the frequency analysis indicated that students generally overestimated their ability to critically evaluate computer-generated text and were largely unaware of relevant ethical issues. Exploratory factor analysis showed that there exist at least two types of students. One group of unknown size uses large language models to accelerate their studies by solving problems they could not otherwise solve. In this case, artificial intelligence has a multiplying effect on the students' productivity. Another group of unknown size instead uses large language models to solve their problems for them, with the goal of improving their academic performance. At the same time, these students overestimate their ability to evaluate computer-generated text critically. In this case, artificial intelligence has a dampening effect on the students' learning. The study concludes that there is currently a need to teach upper secondary students in the technology programme about artificial intelligence. This need can largely be met by a series of three lessons. However, the study acknowledges that there remains room for practical activities in which the teacher guides students in their use of tools such as ChatGPT. Furthermore, there is room for ongoing work on critical thinking and ethics, possibly as part of the aforementioned practical activities.
26

Developing Intelligent Chatbots at Scania : Integrating Technological Solutions and Data Protection Considerations

Söderberg, Johan January 2024 (has links)
This thesis researches the complex intersection of data protection and Intelligent Chatbots (IC) at Scania Group. Developing intelligent chatbots in a secure and GDPR-compliant way is a highly complicated and multifaceted task. The purpose of this research is to provide Scania with organizational knowledge on how this can be achieved. The study uses the Action Design Research framework to develop an artifact that integrates technological solutions with data protection considerations. Through a literature review and semi-structured interviews with employees at Scania, three potential solutions are identified and evaluated: ChatGPT Enterprise, the Secured AI Knowledge Repository (SAIKR), and Techtalker. Each solution offers different capabilities and compliance strategies: ChatGPT Enterprise, while practical, relies on contractual assurances for GDPR compliance with data stored in the USA; SAIKR offers more control, with data stored and encrypted in Sweden, allowing the use of advanced privacy-preserving techniques; and Techtalker, which is hosted directly by Scania, provides enhanced security measures tailored to specific technical use cases. Based on the artifact and the conclusions of this research, generalized design principles for developing intelligent chatbots within a corporate structure are formulated. These four design principles encourage the use of RAG and LLMs, safe and legal data localization, strong contractual safeguards with third-party providers, and comprehensive risk analysis with stringent security measures.
27

Development of a Semantic Search Tool for Swedish Legal Judgements Based on Fine-Tuning Large Language Models

Mikkelsen Toth, Sebastian January 2024 (has links)
Large language models (LLMs) are very large deep learning models trained on huge amounts of data. Among them are Sentence-BERT (SBERT) models — sentence-level bidirectional encoder representations from transformers — to which advanced training methods such as the Transformer-based Sequential Denoising AutoEncoder (TSDAE), GenQ, and an adaptation of Generative Pseudo Labeling (GPL) can be applied. This thesis project aims to develop a semantic search tool for Swedish legal judgements in order to overcome the limitations of traditional keyword search in legal document retrieval. To this end, a model adept at understanding the semantic nuances of legal language was developed by leveraging natural language processing (NLP) and fine-tuning LLMs like SBERT using advanced training methods such as TSDAE, GenQ, and an adaptation of GPL. To generate labeled data from unlabelled data, a fine-tuned GPT-3.5 model was used; generating labeled data with a generative model was crucial for training the SBERT model efficiently. The search tool has been evaluated, and the evaluation demonstrates that it can accurately retrieve relevant documents from semantic queries and significantly improve the efficiency and accuracy of legal research. GenQ proved to be the most efficient training method for this use case.
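The GenQ step mentioned above — generating synthetic queries from unlabelled judgements to obtain (query, document) training pairs for the bi-encoder — can be sketched as follows. The `generate_query` stub is a placeholder for the fine-tuned GPT-3.5 model described in the abstract, and the pair format is an assumption, not the thesis's actual pipeline:

```python
def generate_query(document: str) -> str:
    # Placeholder for the fine-tuned GPT-3.5 query generator; a real
    # implementation would call the model instead of templating.
    first_words = " ".join(document.split()[:4])
    return f"Which judgement concerns {first_words.lower()}?"

def build_genq_pairs(judgements: list[str]) -> list[tuple[str, str]]:
    # GenQ-style training data: each synthetic query is paired with its
    # source judgement as a positive example for SBERT fine-tuning.
    return [(generate_query(doc), doc) for doc in judgements]

judgements = [
    "Tenancy dispute over unpaid rent in Stockholm district court.",
    "Appeal concerning VAT assessment for a construction company.",
]
pairs = build_genq_pairs(judgements)
```

In a real setup these pairs would feed a contrastive objective (e.g., in-batch negatives), so query quality from the generative model directly bounds the quality of the fine-tuned retriever.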
28

Generating Terraform Configuration Files with Large Language Models / Att skapa Terraform-konfigurationsfiler med stora språkmodeller

Bonde, Oskar January 2022 (has links)
This thesis explores how large language models can be used to generate configuration files for Terraform from natural language descriptions. Few-shot and fine-tuning paradigms are evaluated on decoder-only models of varying size, including the state-of-the-art Codex model. The generated configuration files are evaluated with regard to functional correctness on a custom dataset using Terraform, to account for the large space of functionally equivalent configuration files. Results show that the largest model, Codex, is very capable of generating configuration files from an English description of network infrastructure, even without fine-tuning. The result could be a useful tool for engineers who know Terraform fundamentals and have experience with the cloud platforms AWS, GCP, or Azure. A future study could fine-tune Codex for Terraform using OpenAI's API, or create an open-source Codex replication by fine-tuning OPT, the open GPT-3 replication, which in turn can be fine-tuned for Terraform. / Denna avhandling undersöker hur stora språkmodeller kan användas till att generera konfigurationsfiler för Terraform utifrån beskrivningar i naturligt språk. Både few-shot- och fine-tuning-paradigmen utvärderas på decoder-only-modeller i olika storlekar, inklusive Codex. För att ta hänsyn till konfigurationsfiler som ser olika ut men är funktionellt ekvivalenta utvärderas konfigurationsfilerna utifrån deras funktion. Resultaten visar att Codex, som är den största modellen, har förmågan att generera konfigurationsfiler givet en engelsk beskrivning av nätverksinfrastruktur, trots att Codex inte har genomgått fine-tuning. Resultatet kan vara ett användbart verktyg för ingenjörer som har grundläggande kunskap om Terraform och erfarenhet av molnplattformarna AWS, GCP eller Azure. En framtida studie skulle kunna träna Codex för Terraform med OpenAI:s API eller skapa en Codex-kopia genom att träna GPT-3-kopian OPT, som i sin tur kan tränas för Terraform.
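The few-shot paradigm evaluated above amounts to prepending description/configuration exemplars to the prompt so the model continues the pattern. A minimal sketch — the example pair, comment template, and Terraform snippet are illustrative assumptions, not items from the thesis dataset:

```python
# Hypothetical exemplar: (natural language description, Terraform config).
FEW_SHOT_EXAMPLES = [
    (
        "An S3 bucket named logs-bucket",
        'resource "aws_s3_bucket" "logs" {\n  bucket = "logs-bucket"\n}',
    ),
]

def build_few_shot_prompt(description: str) -> str:
    # Prepend each exemplar, then end with the new description so a
    # decoder-only model completes the prompt with a configuration.
    parts = []
    for desc, config in FEW_SHOT_EXAMPLES:
        parts.append(f"# Description: {desc}\n{config}")
    parts.append(f"# Description: {description}\n")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt("A VPC with CIDR block 10.0.0.0/16")
```

The completion would then be run through `terraform validate`/`terraform plan` to judge functional correctness rather than string equality, matching the evaluation approach the abstract describes.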
29

Language Models as Evaluators : A Novel Framework for Automatic Evaluation of News Article Summaries / Språkmodeller som Utvärderare : Ett Nytt Ramverk för Automatiserad Utvärdering av Nyhetssammanfattningar

Helgesson Hallström, Celine January 2023 (has links)
The advancements in abstractive summarization using Large Language Models (LLMs) have brought with them new challenges in evaluating the quality and faithfulness of generated summaries. This thesis explores a human-like automated method for evaluating news article summaries. By leveraging two LLMs with instruction-following capabilities (GPT-4 and Claude), the aim is to examine to what extent the quality of summaries can be measured by the predictions of an LLM. The proposed framework involves defining specific attributes of desired summaries, which are used to design generation prompts and evaluation questions. These questions are presented to the LLMs in natural language during evaluation to assess various summary qualities. To validate the effectiveness of the evaluation method, an adversarial approach is employed, in which a dataset comprising summaries with distortions related to various summary attributes is generated. In an experiment, the two LLMs evaluate the adversarial dataset, and their ability to detect known distortions is measured and analyzed. The findings suggest that the LLM-based evaluations demonstrate promise in detecting binary qualitative issues, such as incorrect facts. However, the reliability of the zero-shot evaluation varies depending on the evaluating LLM and the specific questions used. Further research is required to validate the accuracy and generalizability of the results, particularly in subjective dimensions where the results of this thesis are inconclusive. Nonetheless, this thesis provides insights that can serve as a foundation for future advancements in the field of automatic text evaluation. / De framsteg som gjorts inom abstrakt sammanfattning med hjälp av stora språkmodeller (LLM) har medfört nya utmaningar när det gäller att utvärdera kvaliteten och sanningshalten hos genererade sammanfattningar. Detta examensarbete utforskar en mänskligt inspirerad automatiserad metod för att utvärdera sammanfattningar av nyhetsartiklar.
Genom att dra nytta av två LLM:er med instruktionsföljande förmågor (GPT-4 och Claude) är målet att undersöka i vilken utsträckning kvaliteten av sammanfattningar kan bestämmas med hjälp av språkmodeller som utvärderare. Det föreslagna ramverket innefattar att definiera specifika egenskaper hos önskade sammanfattningar, vilka används för att utforma genereringsuppmaningar (prompts) och utvärderingsfrågor. Dessa frågor presenteras för språkmodellerna i naturligt språk under utvärderingen för att bedöma olika kvaliteter hos sammanfattningar. För att validera utvärderingsmetoden används ett kontradiktoriskt tillvägagångssätt där ett dataset som innefattar sammanfattningar med förvrängningar relaterade till olika sammanfattningsattribut genereras. I ett experiment utvärderar de två språkmodellerna de förvrängda sammanfattningarna, och deras förmåga att upptäcka kända förvrängningar mäts och analyseras. Resultaten tyder på att språkmodellerna visar lovande resultat vid upptäckt av binära kvalitativa problem, såsom faktafel. Dock varierar tillförlitligheten hos utvärderingen beroende på vilken språkmodell som används och de specifika frågorna som ställs. Ytterligare forskning krävs för att validera tillförlitligheten och generaliserbarheten hos resultaten, särskilt när det gäller subjektiva dimensioner där resultaten är osäkra. Trots detta ger detta arbete insikter som kan utgöra en grund för framtida framsteg inom området för automatisk textutvärdering.
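The evaluation framework described above — attribute-specific questions posed to the LLM in natural language, with answers aggregated into a score — can be sketched as follows. The questions, the yes/no scoring rule, and the stubbed answers are assumptions for illustration; the thesis's actual attributes and prompts are not reproduced here:

```python
# Hypothetical attribute → evaluation-question mapping.
EVALUATION_QUESTIONS = {
    "faithfulness": "Does the summary contain only facts stated in the article?",
    "coverage": "Does the summary mention the article's main event?",
    "fluency": "Is the summary written in grammatical, fluent prose?",
}

def score_summary(answers: dict[str, str]) -> float:
    # Each question is posed to the evaluating LLM; here its free-text
    # answers are passed in, and the score is simply the fraction of
    # attributes judged acceptable (answer begins with "yes").
    positives = sum(1 for a in answers.values()
                    if a.strip().lower().startswith("yes"))
    return positives / len(answers)

# Stubbed LLM answers for an adversarial summary with a factual distortion.
mock_answers = {
    "faithfulness": "No, it misstates the date of the event.",
    "coverage": "Yes.",
    "fluency": "Yes, the text reads naturally.",
}
score = score_summary(mock_answers)
```

A known-distortion summary failing exactly its distorted attribute, as above, is what the adversarial validation in the abstract measures.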
30

Cookie Monsters : Using Large Language Models to Measure GDPR Compliance in Cookie Banners Automatically

Otterström, Marcus, Palonkorpi, Oliver January 2023 (has links)
There is a widespread problem of cookie banners not complying with the General Data Protection Regulation (GDPR), which negatively impacts user experience and violates personal data rights. To mitigate this issue, strides need to be made in violation detection to assist developers, designers, lawyers, organizations, and authorities in designing and enforcing GDPR-compliant cookie banners. In this thesis, we present a novel method and an open-source tool for automatically analyzing the GDPR compliance of cookie banners. The tool uniquely leverages large language models together with static code analysis to locate and analyze any cookie banner, using only the website address as input. Informed by the Design Science Research methodology, our research process involved interviews with GDPR legal experts and a thorough review of the current literature in order to understand the problem context and define the objectives for our solution. After an initial version of the tool was created, it was evaluated by a GDPR legal expert. The feedback revealed that, even at this early stage of development, the tool approaches the capabilities of a trained eye, which illustrates its potential. Furthermore, our proposed method is generalizable and can be applied across many domains to solve various problems (e.g., more general web scraping). However, further development and testing with the help of legal experts is required to enhance the tool's accuracy and validity.
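One compliance heuristic of the kind such a tool might apply — a simplified assumption for illustration, not the thesis's actual criteria set — is checking that a banner offers rejecting cookies as prominently as accepting them:

```python
import re

# Simplified GDPR heuristic: rejecting cookies should be as easy as
# accepting them, so a banner should expose a reject control alongside
# the accept control. Real analysis would inspect the DOM, not raw text.
ACCEPT_PATTERN = re.compile(r"accept\s+(all\s+)?cookies", re.IGNORECASE)
REJECT_PATTERN = re.compile(r"(reject|decline)\s+(all\s+)?cookies", re.IGNORECASE)

def banner_has_symmetric_choice(banner_text: str) -> bool:
    return bool(ACCEPT_PATTERN.search(banner_text)) and \
           bool(REJECT_PATTERN.search(banner_text))

compliant = banner_has_symmetric_choice(
    "We value your privacy. Accept all cookies / Reject all cookies")
non_compliant = banner_has_symmetric_choice(
    "We value your privacy. Accept cookies or manage settings")
```

In the tool described by the abstract, the LLM would handle the messy natural-language classification that brittle regexes like these cannot, with static code analysis locating the banner in the first place.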
