About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
401

Using Blockchain to Ensure Reputation Credibility in Decentralized Review Management

Zaccagni, Zachary James 12 1900 (has links)
In recent years, there have been incidents that decreased people's trust in some of the organizations and authorities responsible for ratings and accreditation. To cite a few prominent examples: Equifax suffered a security breach (2017), misconduct was found at Standard & Poor's Ratings Services (2015), and the Accrediting Council for Independent Colleges and Schools (2022) certified some low-performing schools as meeting higher standards than they actually did. A natural solution to these types of issues is to decentralize the relevant trust management processes using blockchain technologies. The research problems tackled in this thesis consider trust in reputation, assessment, and review credibility from different angles, in the context of blockchain applications.

We first explored the following question: how can we trust courses at one college to provide students with the type and level of knowledge needed in a specific workplace? Our solution was micro-accreditation on a blockchain, including a peer-review system that determines the rigor of a course through consensus. Rigor here means the level of difficulty relative to a student's expected level of knowledge. Currently, we make assumptions about the quality and rigor of what is learned, but this is prone to human bias and misunderstandings. We present a decentralized approach that tracks student records throughout their academic progress at a school and helps match employers' requirements to students' knowledge. We do this by applying micro-accredited topics and Knowledge Units (KUs) defined by NSA's Center of Academic Excellence to courses and assignments. Using simulated datasets, we demonstrate that the system increases hiring accuracy and that it is both efficient and scalable.

Another problem is how we can trust that the peer reviews are honest and reflect an accurate rigor score. Assigning reputation to peers is a natural method to ensure the correctness of these assessments. The reputation of the peers providing rigor scores needs to be taken into account in the overall rigor of a course, its topics, and its tasks; specifically, those with a higher reputation should have more influence on the total score. Hence, we focused on how a peer's reputation is managed. We explored decentralized reputation management for the peers, choosing a decentralized marketplace as a sample application, and presented an approach to ensuring review credibility, which is a particular aspect of trust in reviews and in the reputation of the parties who provide them. We use the Proof-of-Stake-based Algorand system as the base of our implementation, since it is open-source and has rich community support. Specifically, we directly map reputation to stake, which allows us to deploy Algorand at the blockchain layer. Reviews are analyzed by the proposed evaluation component using Natural Language Processing (NLP): NLP gauges the positivity of the written review, compares that value to the scaled numerical rating given, and determines adjustments to a peer's reputation from that result. We demonstrate that this architecture ensures credible and trustworthy assessments and efficiently manages the reputation of the peers, while keeping consensus times reasonable.

We then turned our focus to ensuring that a peer's reputation is credible, which led us to introduce a new type of consensus called "Proof-of-Review". Our proposed implementation is again based on Algorand, since its modular architecture allows for easy modifications such as adding extra components; this time, however, we modified the consensus engine itself. The proposed model then provides trust in evaluations (review and assessment credibility) and in those who provide them (reputation credibility) using a blockchain. We introduce a blacklisting component, which prevents malicious nodes from participating in the protocol, and a minimum-reputation component, which limits the influence of under-performing users. Our results showed that the proposed blockchain system maintains liveness and completeness; specifically, blacklisting and the minimum-reputation requirement (when properly tuned) do not affect these properties. We note that the Proof-of-Review concept can be deployed in other types of applications with similar needs for trust in assessments and in the players providing them, such as sensor arrays, autonomous car groups (caravans), marketplaces, and more.
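To make the review-evaluation step concrete, here is a minimal sketch of the idea of comparing a review's sentiment to its numerical rating and adjusting the reviewer's reputation (stake) by the level of agreement. The word lists, scaling constants, and function names are hypothetical illustrations, not the thesis's implementation; a real system would use a trained NLP sentiment model rather than word counting.

```python
# Hypothetical sketch: compare the sentiment of a written review with the numerical
# rating it accompanies, and adjust the reviewer's reputation (stake) by agreement.
# The word lists, scaling, and update rule are illustrative assumptions only.

POSITIVE = {"good", "great", "excellent", "helpful", "clear", "rigorous"}
NEGATIVE = {"bad", "poor", "confusing", "unhelpful", "sloppy", "misleading"}

def sentiment_score(review_text: str) -> float:
    """Return a crude sentiment in [-1, 1] from word counts (stand-in for NLP)."""
    words = review_text.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def reputation_delta(review_text: str, rating: int, max_rating: int = 5) -> float:
    """Positive delta when the text and the numeric rating agree, negative otherwise."""
    text_polarity = sentiment_score(review_text)               # in [-1, 1]
    rating_polarity = 2 * (rating - 1) / (max_rating - 1) - 1  # rating scaled to [-1, 1]
    agreement = 1.0 - abs(text_polarity - rating_polarity)     # 1 = perfect agreement
    return round(0.1 * (2 * agreement - 1), 4)                 # small stake adjustment

if __name__ == "__main__":
    delta = reputation_delta("Great course, clear and rigorous assignments.", rating=5)
    print(f"reputation change: {delta:+.4f}")  # agreement -> small positive adjustment
```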
402

Comparative Analysis of User Satisfaction Between Keyword-based and GPT-based E-commerce Chatbots : A qualitative study utilizing user testing to compare user satisfaction based on the IKEA chatbot.

Bitinas, Romas, Hassellöf, Axel January 2024 (has links)
Chatbots are computer programs that interact with users using natural language. Businesses benefit from using chatbots as they can provide a better and more satisfying customer experience. This thesis investigates differences in user satisfaction with two types of e-commerce chatbots: a keyword-based chatbot and a GPT-based chatbot. The study focuses on user interactions with IKEA's chatbot "Billie" compared to a prototype GPT-based chatbot designed for similar functionalities. Using a within-subjects experimental design, participants were tasked with typical e-commerce queries, followed by interviews to gather qualitative data about each participant's experience. The research aims to determine whether a chatbot based on GPT technology can offer a more intuitive, engaging, and empathetic user experience than traditional keyword-based chatbots in the realm of e-commerce. The findings reveal that the GPT-based chatbot generally provided more accurate and relevant responses, enhancing user satisfaction. Participants appreciated the GPT chatbot's better comprehension and handling of natural language, though both systems still exhibited some unnatural interactions. The keyword-based chatbot often failed to understand user intent accurately, leading to user frustration and lower satisfaction. These results suggest that integrating advanced AI technologies such as GPT-based chatbots could improve user satisfaction in e-commerce settings, highlighting the potential for more human-like and effective customer service.
403

Parametric Optimal Design Of Uncertain Dynamical Systems

Hays, Joseph T. 02 September 2011 (has links)
This research effort develops a comprehensive computational framework to support the parametric optimal design of uncertain dynamical systems. Uncertainty comes from various sources, such as system parameters, initial conditions, sensor and actuator noise, and external forcing. Treatment of uncertainty in design is of paramount practical importance because all real-life systems are affected by it; not accounting for uncertainty may result in poor robustness, sub-optimal performance, and higher manufacturing costs. Contemporary methods for the quantification of uncertainty in dynamical systems are computationally intensive, which has so far made a robust design optimization methodology prohibitive. Some existing algorithms address uncertainty in sensors and actuators during optimal design; however, a comprehensive design framework that can treat all kinds of uncertainty with diverse distribution characteristics in a unified way is currently unavailable. The computational framework uses the Generalized Polynomial Chaos methodology to quantify the effects of the various sources of uncertainty found in dynamical systems; a Least-Squares Collocation Method is used to solve the corresponding uncertain differential equations. This technique is significantly faster computationally than traditional sampling methods and makes the construction of a parametric optimal design framework for uncertain systems feasible. The novel framework makes it possible to treat uncertainty directly in the parametric optimal design process. Specifically, the following design problems are addressed: motion planning of fully-actuated and under-actuated systems; multi-objective robust design optimization; and optimal uncertainty apportionment concurrently with robust design optimization. The framework advances the state-of-the-art and enables engineers to produce more robust and optimally performing designs at an optimal manufacturing cost. / Ph. D.
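As a rough illustration of the polynomial chaos idea, the sketch below expands the response of a toy one-parameter system in a Hermite chaos basis and fits the coefficients by least squares at collocation points. The toy ODE, collocation grid, and truncation order are assumptions for illustration; the thesis's Least-Squares Collocation Method for general uncertain dynamical systems is considerably more general than this non-intrusive sketch.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite polynomials

# Non-intrusive sketch of generalized Polynomial Chaos with a least-squares fit of
# the expansion coefficients. The toy system dx/dt = -k*x has an uncertain decay
# rate k = mu + sigma*xi with xi ~ N(0, 1); all numbers are illustrative only.
mu, sigma, x0, t_final = 1.0, 0.2, 1.0, 2.0
P = 4  # highest polynomial degree retained in the expansion

def solve_ode(k: float) -> float:
    """Deterministic solve at one collocation point (analytic here for brevity)."""
    return x0 * np.exp(-k * t_final)

# Collocation points in the standard normal variable xi, and the model responses.
xi = np.linspace(-3.0, 3.0, 25)
responses = np.array([solve_ode(mu + sigma * x) for x in xi])

# Design matrix of Hermite basis polynomials He_0..He_P evaluated at the points.
basis = np.column_stack([hermeval(xi, np.eye(P + 1)[j]) for j in range(P + 1)])
coeffs, *_ = np.linalg.lstsq(basis, responses, rcond=None)

# Mean and variance follow from orthogonality: E[He_j(xi)^2] = j! for xi ~ N(0,1).
mean = coeffs[0]
variance = sum(factorial(j) * coeffs[j] ** 2 for j in range(1, P + 1))
print(f"gPC mean of x(t_final) ~ {mean:.4f}, variance ~ {variance:.6f}")
```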
404

Information Extraction from Pilot Weather Reports (PIREPs) using a Structured Two-Level Named Entity Recognition (NER) Approach

Shantanu Gupta (18881197) 03 July 2024 (has links)
<p dir="ltr">Weather conditions such as thunderstorms, wind shear, snowstorms, turbulence, icing, and fog can create potentially hazardous flying conditions in the National Airspace System (NAS) (FAA, 2021). In general aviation (GA), hazardous weather conditions are most likely to cause accidents with fatalities (FAA, 2013). Therefore, it is critical to communicate weather conditions to pilots and controllers to increase awareness of such conditions, help pilots avoid weather hazards, and improve aviation safety (NTSB, 2017b). Pilot Reports (PIREPs) are one way to communicate pertinent weather conditions encountered by pilots (FAA, 2017a). However, in a hazardous weather situation, communication adds to pilot workload and GA pilots may need to aviate and navigate to another area before feeling safe enough to communicate the weather conditions. The delay in communication may result in PIREPs that are both inaccurate and untimely, potentially misleading other pilots in the area with incorrect weather information (NTSB, 2017a). Therefore, it is crucial to enhance the PIREP submission process to improve the accuracy, timeliness, and usefulness of PIREPs, while simultaneously reducing the need for hands-on communication.</p><p dir="ltr">In this study, a potential method to incrementally improve the performance of an automated spoken-to-coded-PIREP system is explored. This research aims at improving the information extraction model within the spoken-to-coded-PIREP system by using underlying structures and patterns in the pilot spoken phrases. The first part of this research is focused on exploring the structural elements, patterns, and sub-level variability in the Location, Turbulence, and Icing pilot phrases. The second part of the research is focused on developing and demonstrating a structured two-level Named Entity Recognition (NER) model that utilizes the underlying structures within pilot phrases. A structured two-level NER model is designed, developed, tested, and compared with the initial single level NER model in the spoken-to-coded-PIREP system. The model follows a structured approach to extract information at two levels within three PIREP information categories – Location, Turbulence, and Icing. The two-level NER model is trained and tested using a total of 126 PIREPs containing Turbulence and Icing weather conditions. The performance of the structured two-level NER model is compared to the performance of a comparable single level initial NER model using three metrics – precision, recall, and F1-Score. The overall F1-Score of the initial single level NER model was in the range of 68% – 77%, while the two-level NER model was able to achieve an overall F1-Score in the range of 89% – 92%. The two-level NER model was successful in recognizing and labelling specific phrases into broader entity labels such as Location, Turbulence, and Icing, and then processing those phrases to segregate their structural elements such as Distance, Location Name, Turbulence Intensity, and Icing Type. With improvements to the information extraction model, the performance of the overall spoken-to-coded-PIREP system may be increased and the system may be better equipped to handle the variations in pilot phrases and weather situations. 
Automating the PIREP submission process may reduce the pilot’s hands-on task-requirement in submitting a PIREP during hazardous weather situations, potentially increase the quality and quantity of PIREPs, and share accurate weather-related information in a timely manner, ultimately making GA flying safter.</p>
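To illustrate the two-level idea (broad entity spans first, structural elements second), here is a toy, regex-based sketch on a simplified pilot phrase. The patterns, example report, and field names are illustrative assumptions that stand in for the trained NER models described in the thesis.

```python
import re

# Toy sketch of two-level extraction on a simplified PIREP-like phrase.
# Level 1 labels a broad span (Location / Turbulence / Icing); level 2 parses the
# structural elements inside that span. The regex patterns and the example phrase
# are illustrative assumptions, not the trained NER model described in the thesis.

LEVEL1_PATTERNS = {
    "Location":   re.compile(r"(?P<span>\d+\s+miles\s+\w+\s+of\s+[A-Za-z ]+?)(?=,|$)"),
    "Turbulence": re.compile(r"(?P<span>(light|moderate|severe)\s+turbulence)", re.I),
    "Icing":      re.compile(r"(?P<span>(trace|light|moderate|severe)\s+(rime|clear|mixed)\s+icing)", re.I),
}

LEVEL2_PATTERNS = {
    "Location":   re.compile(r"(?P<Distance>\d+)\s+miles\s+(?P<Direction>\w+)\s+of\s+(?P<LocationName>[A-Za-z ]+)", re.I),
    "Turbulence": re.compile(r"(?P<Intensity>light|moderate|severe)\s+turbulence", re.I),
    "Icing":      re.compile(r"(?P<Intensity>trace|light|moderate|severe)\s+(?P<IcingType>rime|clear|mixed)\s+icing", re.I),
}

def extract(phrase: str) -> dict:
    """Run the two passes: broad entity spans first, structural elements second."""
    result = {}
    for label, pattern in LEVEL1_PATTERNS.items():
        match = pattern.search(phrase)
        if match:
            span = match.group("span")
            fields = LEVEL2_PATTERNS[label].search(span)
            result[label] = {"span": span, **(fields.groupdict() if fields else {})}
    return result

if __name__ == "__main__":
    report = "20 miles south of Lafayette, moderate turbulence and light rime icing at 8000 feet"
    for label, info in extract(report).items():
        print(label, info)
```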
405

Natural Language Based AI Tools in Interaction Design Research : Using ChatGPT for Qualitative User Research Insight Analysis

Saare, Karmen January 2024 (has links)
This thesis investigates the use of Artificial Intelligence, specifically the Large Language Model (LLM) application ChatGPT, in the context of qualitative user research, with the goal of enhancing the user research interview analysis process. Through an empirical study in which ChatGPT was used in the process of a typical user research insight analysis, the limitations and opportunities of the AI tool are examined. The study's results highlight the most significant insights from the empirical investigation, serving as examples to raise awareness of the implications of using ChatGPT in the context of user interview analysis. The study concludes that ChatGPT has the potential to enhance the interpretation of primarily individual interviews by generating well-articulated summaries, provided their accuracy can be verified. Additionally, ChatGPT may be particularly useful in low-risk design projects where the consequences of potential misinterpretations are minimal. Finally, the study points out the importance of clearly articulated written instructions for ChatGPT in achieving the best results.
406

Event-Cap – Event Ranking and Transformer-based Video Captioning / Event-Cap – Event rankning och transformerbaserad video captioning

Cederqvist, Gabriel, Gustafsson, Henrik January 2024 (has links)
In the field of video surveillance, vast amounts of data are gathered each day. To identify what occurred during a recorded session, a human annotator has to go through the footage and annotate the different events. This is a tedious and expensive process that takes up a large amount of time. With the rise of machine learning, and in particular deep learning, the fields of both image and video captioning have seen large improvements. Contrastive Language-Image Pre-training is capable of efficiently learning a multimodal space, merging the understanding of text and images. This enables visual features to be extracted and processed into text describing the visual content. This thesis presents a system for extracting and ranking important events from surveillance videos as well as a way of automatically generating descriptions of the events. By utilizing the pre-trained models X-CLIP and GPT-2 to extract visual information from the videos and process it into text, a video captioning model was created that requires very little training. Additionally, a ranking system was implemented to extract important parts of a video, utilizing anomaly detection as well as polynomial regression. Captions were evaluated using the metrics BLEU, METEOR, ROUGE, and CIDEr, and the model received scores comparable to other video captioning models. Additionally, captions were evaluated by experts in the field of video surveillance, who rated them on accuracy, reaching up to 62.9%, and semantic quality, reaching 99.2%. Furthermore, the ranking system was also evaluated by the experts, who agreed with the ranking system 78% of the time.
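A minimal sketch of the ranking idea, combining per-frame anomaly scores with a polynomial-regression baseline, is shown below. The synthetic scores, window size, and polynomial degree are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

# Sketch: rank video segments by how strongly their anomaly scores deviate from a
# smooth polynomial trend fitted over the whole recording. The synthetic scores,
# window size, and polynomial degree are illustrative assumptions only.
rng = np.random.default_rng(0)
frame_scores = rng.normal(0.2, 0.05, 600)   # per-frame anomaly scores
frame_scores[250:280] += 0.6                # an injected "event"

frames = np.arange(len(frame_scores))
trend = np.polyval(np.polyfit(frames, frame_scores, deg=3), frames)
residual = frame_scores - trend             # deviation from the fitted baseline

window = 30                                 # frames per candidate segment
segments = residual[: len(residual) // window * window].reshape(-1, window)
segment_score = segments.mean(axis=1)

ranking = np.argsort(segment_score)[::-1]   # highest deviation first
for rank, seg in enumerate(ranking[:3], start=1):
    start, end = seg * window, (seg + 1) * window
    print(f"rank {rank}: frames {start}-{end}, score {segment_score[seg]:.3f}")
```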
407

Clustering and Anomaly detection using Medical Enterprise system Logs (CAMEL) / Klustring av och anomalidetektering på systemloggar

Ahlinder, Henrik, Kylesten, Tiger January 2023 (has links)
Research on automated anomaly detection in complex systems using log files has been on an upswing with the introduction of new deep-learning natural language processing methods. However, manually identifying and labelling anomalous logs is time-consuming, error-prone, and labor-intensive. This thesis instead uses an existing state-of-the-art method that learns from positive-unlabeled (PU) data as a baseline and evaluates three extensions to it. The first extension provides insight into how the choice of word embeddings affects performance on the downstream task. The second extension applies a re-labelling strategy to reduce problems arising from pseudo-labelling. The final extension removes the need for pseudo-labelling by applying a state-of-the-art loss function from the field of PU learning. The findings show that FastText and GloVe embeddings are viable options, with FastText providing faster training times but mixed results in terms of performance. It is shown that several of the methods studied in this thesis suffer from sporadically poor performance on one of the datasets studied. Finally, it is shown that using modified risk functions from the field of PU learning provides new state-of-the-art performance on the datasets considered in this thesis.
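The abstract does not name the specific PU risk function used; one widely cited choice is the non-negative PU (nnPU) estimator, sketched below under an assumed class prior and a sigmoid surrogate loss. Whether this matches the exact loss adopted in the thesis is an assumption.

```python
import numpy as np

# Sketch of the non-negative PU (nnPU) risk estimator, one widely used PU-learning
# loss; whether it matches the specific risk function used in the thesis is an
# assumption. `pos_out` / `unl_out` are model scores for positive and unlabeled
# logs, and `prior` is the assumed fraction of anomalous logs among the unlabeled.

def sigmoid_loss(scores: np.ndarray, sign: int) -> np.ndarray:
    """Sigmoid surrogate loss l(z, y) = 1 / (1 + exp(y * z)) for label y = sign."""
    return 1.0 / (1.0 + np.exp(sign * scores))

def nnpu_risk(pos_out: np.ndarray, unl_out: np.ndarray, prior: float) -> float:
    risk_pos = prior * sigmoid_loss(pos_out, +1).mean()          # positives as positive
    risk_pos_as_neg = prior * sigmoid_loss(pos_out, -1).mean()   # positives as negative
    risk_unl_as_neg = sigmoid_loss(unl_out, -1).mean()           # unlabeled as negative
    # Clamp the estimated negative risk at zero to avoid overfitting (the nnPU trick).
    return risk_pos + max(0.0, risk_unl_as_neg - risk_pos_as_neg)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos_scores = rng.normal(+1.5, 1.0, 200)    # scores for labeled anomalous logs
    unl_scores = rng.normal(-0.5, 1.0, 1000)   # scores for unlabeled logs
    print(f"nnPU risk: {nnpu_risk(pos_scores, unl_scores, prior=0.1):.4f}")
```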
408

Direct Preference Optimization for Improved Technical Writing Assistance : A Study of How Language Models Can Support the Writing of Technical Documentation at Saab / En studie i hur språkmodeller kan stödja skrivandet av teknisk dokumentation på Saab

Bengtsson, Hannes, Habbe, Patrik January 2024 (has links)
This thesis explores the potential of Large Language Models (LLMs) to assist in the technical documentation process at Saab. With the increasing complexity of and regulatory demands on such documentation, the objective is to investigate advanced natural language processing techniques as a means of streamlining the creation of technical documentation. Although many standards exist, this thesis particularly focuses on ASD-STE100, Simplified Technical English (STE), a controlled language for technical documentation. STE's primary aim is to ensure that technical documents are understandable to individuals regardless of their native language or English proficiency. The study focuses on the implementation of Direct Preference Optimization (DPO) and Supervised Instruction Fine-Tuning (SIFT) to refine the capabilities of LLMs in producing clear and concise outputs that comply with STE. Through a series of experiments, we investigate the effectiveness of LLMs in interpreting and simplifying technical language, with a particular emphasis on adherence to the STE standard. The study utilizes a dataset of target data paired with synthetic source data generated by an LLM. We apply various model training strategies, including zero-shot performance, supervised instruction fine-tuning, and direct preference optimization. We evaluate the various models' output using established quantitative metrics for text simplification, and substitute human evaluators with company-internal software that evaluates adherence to company standards and STE. Our findings suggest that while LLMs can significantly contribute to the technical writing process, the choice of training methods and the quality of the data play crucial roles in the model's performance. The study shows how LLMs can improve productivity and reduce manual work, discusses the remaining problems, and suggests ways to improve the automation of technical documentation in the future.
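The DPO objective itself is standard and can be sketched for a single preference pair as below. The log-probabilities and the beta value are made-up numbers; how preference pairs were constructed from the STE-compliant targets and synthetic sources in the thesis is not reproduced here.

```python
import math

# Sketch of the Direct Preference Optimization (DPO) objective for one preference
# pair: a "chosen" (STE-compliant) rewrite y_w and a "rejected" rewrite y_l for the
# same source sentence x. The log-probabilities and beta are illustrative numbers.

def dpo_loss(policy_logp_w: float, policy_logp_l: float,
             ref_logp_w: float, ref_logp_l: float, beta: float = 0.1) -> float:
    """-log sigmoid(beta * ((logpi_w - logref_w) - (logpi_l - logref_l)))."""
    margin = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Example: the fine-tuned policy already prefers the chosen rewrite slightly more
# than the frozen reference model does, so the loss falls a bit below log(2) ~ 0.693.
loss = dpo_loss(policy_logp_w=-42.0, policy_logp_l=-55.0,
                ref_logp_w=-44.0, ref_logp_l=-54.0, beta=0.1)
print(f"DPO loss for this pair: {loss:.4f}")
```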
409

Marco Polo's Travels Revisited: From Motion Event Detection to Optimal Path Computation in 3D Maps

Niekler, Andreas, Wolska, Magdalena, Wiegmann, Matti, Stein, Benno, Burghardt, Manuel, Thiel, Marvin 11 July 2024 (has links)
In this work, we present a workflow for semi-automatic extraction of geo-references and motion events from the book 'The Travels of Marco Polo'. These are then used to create 3D renderings of the space and movement, which allow readers to visually trace Marco Polo's route themselves and experience the journey in its entirety.
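As a loose illustration of optimal path computation over terrain, the sketch below runs Dijkstra's algorithm on a toy elevation grid where each step costs one unit plus any climb. The grid, cost model, and coordinates are illustrative assumptions; the thesis's routing between Marco Polo's geo-referenced waypoints in 3D maps is not reproduced here.

```python
import heapq

# Sketch: Dijkstra's algorithm on a toy elevation grid, where each step costs
# 1 plus the climb in elevation. The grid and cost model are illustrative only.

def shortest_path(elevation, start, goal):
    rows, cols = len(elevation), len(elevation[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step_cost = 1 + max(0, elevation[nr][nc] - elevation[r][c])
                nd = d + step_cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

elevation = [[0, 1, 4], [1, 2, 1], [0, 5, 0]]   # toy height map
path, cost = shortest_path(elevation, (0, 0), (2, 2))
print(f"path: {path}, cost: {cost}")
```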
410

Facilitating forgiveness: an NLP approach to forgiving

Von Krosigk, Beate Christine 31 May 2004 (has links)
Facilitating forgiveness: an NLP approach to forgiving is an attempt to uncover features of the blocks that prevent people from forgiving. These blocks to forgiveness can be detected in the real-life situations of the six individuals who told me their stories. The inner thoughts, feelings, and subsequent behaviour that prevented them from forgiving others are clearly uncovered in their stories. The facilitation process highlights the features that created the blocks in the past, thus preventing forgiveness from occurring. The blocks with their accompanying features reveal what needs to be clarified or changed in order to eventually enable the hurt individuals to forgive those who have hurt them. The application of discourse analysis to the stories of hurt links the individuals' real-life experiences of unforgiveness, within their contexts, to the research findings of the existing body of knowledge, thereby creating a complex, interwoven, and comprehensive understanding of the individuals' thoughts, feelings, and behaviours in conjunction with their developmental phases within their socio-cultural contexts. Neuro-linguistic programming (NLP) is the instrument with which forgiving is facilitated in the six individuals who expressed a conscious desire to forgive because they were unable to do so on their own. Their emotions had the habit of keeping them in a place in which they were forced to relive the hurtful event as if it were happening in the present. Arresting the process of reliving negative emotions requires a new way of being in this world. The assumption that this can be learnt is based on the results of a previous study, in which forgiveness was uncovered, by means of the grounded theory approach, as a cognitive process (Von Krosigk, 2000). The results from the previous research, in conjunction with the results and insights from this research study, are presented in the form of a grounded theory model of forgiveness. / Psychology / D. Litt. et Phil. (Psychology)
