  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Relocation of a neutron capture prompt gamma-ray analysis facility at the University of Missouri Research Reactor and measurement of boron in various materials /

Lai, Chao-Jen, January 2000 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2000. / Typescript. Vita. Includes bibliographical references (leaves 112-118). Also available on the Internet.
22

Changing the Narrative Perspective: A New Language Processing Task and Machine Learning Approaches

Chen, Mike 23 May 2022 (has links)
No description available.
23

Manding for Information Maintained by Social Reinforcement: A Comparison of Prompting Procedures

Swerdan, Matthew G. 31 May 2013 (has links)
No description available.
24

Responsible AI in Educational Chatbots: Seamless Integration and Content Moderation Strategies / Ansvarsfull AI i pedagogiska chatbots: strategier för sömlös integration och moderering av innehåll

Eriksson, Hanna January 2024 (has links)
With the increasing integration of artificial intelligence (AI) technologies into educational settings, it becomes important to ensure responsible and effective use of these systems. This thesis addresses two critical challenges within AI-driven educational applications: the effortless integration of different Large Language Models (LLMs) and the mitigation of inappropriate content. An AI assistant chatbot was developed, allowing teachers to design custom chatbots and set rules for them, enhancing students’ learning experiences. Evaluation of LangChain as a framework for LLM integration, alongside various prompt engineering techniques including zero-shot, few-shot, zero-shot chain-of-thought, and prompt chaining, revealed LangChain’s suitability for this task and highlighted prompt chaining as the most effective method for mitigating inappropriate content in this use case. Looking ahead, future research could focus on further exploring prompt engineering capabilities and strategies to ensure uniform learning outcomes for all students, as well as leveraging LangChain to enhance the adaptability and accessibility of educational applications.
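The prompt-chaining idea found most effective above — routing each user message through a dedicated moderation prompt before the tutoring prompt ever sees it — can be sketched in plain Python. This is an illustrative sketch only, not the thesis's LangChain implementation; `call_llm` is a hypothetical stand-in that returns canned replies so the control flow can be shown deterministically.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call (e.g. via LangChain); canned replies."""
    if "Classify the following student message" in prompt:
        return "INAPPROPRIATE" if "cheat" in prompt.lower() else "OK"
    return "Here is a hint: revisit the definition of a derivative."

def moderated_reply(message: str) -> str:
    # Stage 1 of the chain: a dedicated moderation prompt classifies the input.
    verdict = call_llm(
        "Classify the following student message as OK or INAPPROPRIATE.\n"
        f"Message: {message}"
    )
    # Stage 2: only messages that pass moderation reach the tutoring prompt.
    if verdict.strip() == "INAPPROPRIATE":
        return "I can't help with that. Let's get back to the course material."
    return call_llm(f"You are a course tutor. Answer helpfully:\n{message}")

print(moderated_reply("How do I cheat on the exam?"))
```

Separating the moderation decision from the answer generation is what makes the chain auditable: each stage has one job, and the second prompt never has to defend against input the first already rejected.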
25

Prompt Engineering: Toward a Rhetoric and Poetics for Neural Network Augmented Authorship in Composition and Rhetoric

Foley, Christopher 01 January 2024 (has links) (PDF)
My dissertation introduces the notion of "augmented authorship" and applications for prompt engineering with generative neural networks, inspired by Gregory Ulmer's theories of electracy (2003), to the interdisciplinary fields that teach writing and rhetoric. With the goal of inspiring the general practice of electracy, I introduce prompt engineering as a practice in flash reason (Ulmer 2008; 2012), a new collective prudence emerging from the apparatus of electracy. By situating electracy and flash reason as threshold concepts in writing studies, and by aligning principles of electracy with ACRL and NCTE digital literacy frameworks, I demonstrate how prompt engineering across modalities can help students meet digital literacy goals. I then provide accessible models, or "relays", in the form of AI-coauthored texts, course modules, and aesthetic models deployed in the game world Roblox.
26

Toward the Clinical Application of the Prompt Gamma-Ray Timing Method for Range Verification in Proton Therapy

Petzoldt, Johannes 08 May 2017 (has links)
The prompt gamma-ray timing (PGT) method offers a relatively simple approach to range verification in proton therapy. Starting from the findings of previous experiments, several steps toward a clinical application of PGT were taken in this work. First, several scintillation materials were investigated in the context of PGT, and their time resolution was determined at high photon energies in the MeV region. In conclusion, the fast and bright scintillator CeBr3 is the material of choice, in combination with a timing photomultiplier tube as light detector. A second study was conducted at Universitäts Protonen Therapie Dresden (UPTD) to characterize the proton bunch structure of a clinical beam in terms of its time width and relative arrival time. These data are required as input for simulation studies and to correct for phase drifts; they were furthermore used for the first 2D imaging of a heterogeneous phantom based on prompt gamma-rays. In a final step, a PGT prototype system was designed using the findings from the first two studies. The prototype is based on a newly developed digital spectrometer and a CeBr3 detector. The device was characterized at the ELBE bremsstrahlung beam, and it was verified that the prototype operates within the specifications concerning time resolution as well as throughput rate. Finally, for the first time, the PGT system was used under clinical conditions in the treatment room of UPTD: PGT data were obtained from the delivery of a three-dimensional treatment plan onto PMMA phantoms. The spot-by-spot analysis helped to investigate the performance of the prototype under clinical conditions. As a result, range variations of 5 mm could be detected for the first time with an uncollimated system at clinically relevant doses. In summary, the obtained results help bring PGT closer to clinical application.
27

Exploring GPT models as biomedical knowledge bases: By evaluating prompt methods for extracting information from language models pre-trained on scientific articles

Hellberg, Ebba January 2023 (has links)
Scientific findings recorded in the literature continuously guide scientific advancement, but manual approaches to accessing that knowledge are insufficient given the sheer quantity of information and data available. Although pre-trained language models are being explored for their utility as knowledge bases and structured data repositories, there is a lack of research on this application in the biomedical domain. The aim of this project was therefore to determine how Generative Pre-trained Transformer (GPT) models pre-trained on articles in the biomedical domain can be used to make relevant information more accessible. Several models (BioGPT, BioGPT-Large, and BioMedLM) were evaluated on the task of extracting chemical-protein relations between entities directly from the models through prompting. Prompts were formulated as natural language text or as an ordered triple, and provided in different settings (few-shot, one-shot, or zero-shot). Model predictions were evaluated quantitatively as a multiclass classification task using a macro-averaged F1-score. The results showed that, out of the explored methods, the best performance for extracting chemical-protein relations from article abstracts was obtained using a triple-based text prompt on the largest model, BioMedLM, in the few-shot setting, albeit with only a small improvement over the baseline (+0.019 F1). There was no clear pattern as to which prompt setting was favourable in terms of task performance; however, the triple-based prompt was generally more robust than the natural language formulation. The two smaller models underperformed the random baseline (by at best -0.026 and -0.001 F1). The impact of the prompt method was minimal in the smallest model, and the one-shot setting was the least sensitive to the prompt formulation in all models. However, there were more pronounced differences between the prompt methods in the few-shot setting of the larger models (+0.021-0.038 F1).
The results suggested that the prompting method and the size of the model affect a language model's knowledge-eliciting performance. Admittedly, the models mostly underperformed the baseline, and future work needs to examine how to adapt generative language models to this task. Future research could also investigate the impact of automatic prompt-design methods and larger in-domain models on model performance.
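The triple-based prompt formulation evaluated above can be illustrated with a small builder. This is a hypothetical sketch of the general idea — demonstrations as ordered (chemical, protein, relation) triples, with the final triple left open for the model to complete — not the thesis's exact prompt; the example triples and relation labels are invented for illustration.

```python
# Hypothetical few-shot demonstrations; the real work prompted BioGPT/BioMedLM
# with chemical-protein relations drawn from annotated biomedical corpora.
EXAMPLES = [
    ("caffeine", "adenosine receptor", "antagonist"),
    ("aspirin", "COX-1", "inhibitor"),
]

def triple_prompt(chemical: str, protein: str, shots=EXAMPLES) -> str:
    """Build a few-shot, triple-formatted prompt for relation extraction."""
    lines = ["Complete the relation triple."]
    for chem, prot, rel in shots:
        lines.append(f"({chem}, {prot}, {rel})")
    # The query triple is left open; the model is asked to fill in the relation.
    lines.append(f"({chemical}, {protein},")
    return "\n".join(lines)

print(triple_prompt("imatinib", "BCR-ABL"))
```

Dropping the shots (an empty `shots` list) gives the zero-shot variant, and a single demonstration gives one-shot, which is how the three settings compared in the study differ.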
28

Development of an Anonymization Prototype for Secure Interaction with Chatbots / Utveckling av en anonymiseringsprototyp för säker interaktion med chatbotar

Hanna, John Nabil, Berjlund, William January 2024 (has links)
This study presents a prototype for anonymizing sensitive information in text documents, with the aim of enabling secure interaction with large language models (LLMs) such as ChatGPT. The prototype offers a platform where users can upload documents and anonymize specific sensitive words. After anonymization, users can pose questions to ChatGPT based on the anonymized content. The prototype restores the anonymized parts in the responses from ChatGPT before they are displayed to the user, ensuring that sensitive information remains protected throughout the entire interaction. The study follows the Design Science Research in Information Systems (DSRIS) method. The prototype was developed in Java and tested with fabricated documents, while survey responses were collected to evaluate the user experience. The results show that the prototype's functions work well and protect sensitive information during interaction with ChatGPT. The prototype was evaluated using the survey responses, which also highlight opportunities for improvement. In conclusion, the study demonstrates that it is possible to anonymize text documents effectively while still obtaining accurate and useful feedback from ChatGPT. Despite some limitations in the user interface due to the project's timeframe, the study shows the potential for secure data handling with ChatGPT.
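The anonymize-then-restore workflow the prototype implements (in Java) can be sketched in a few lines of Python. The placeholder format, example names, and fixed word list are invented for illustration; a real system would need robust entity detection rather than a user-supplied list.

```python
import re

def anonymize(text: str, sensitive: list[str]):
    """Replace each sensitive word with a placeholder; return text and mapping."""
    mapping = {}
    for i, word in enumerate(sensitive):
        placeholder = f"<ENTITY_{i}>"
        mapping[placeholder] = word
        text = re.sub(re.escape(word), placeholder, text)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Put the original words back into a model response, locally, before display."""
    for placeholder, word in mapping.items():
        text = text.replace(placeholder, word)
    return text

doc = "Patient Anna Lind was treated at Karolinska."
masked, mapping = anonymize(doc, ["Anna Lind", "Karolinska"])
reply = f"Summary: {masked}"          # stand-in for a ChatGPT response
print(restore(reply, mapping))        # sensitive words restored only client-side
```

The key property is that the mapping never leaves the user's machine: the LLM only ever sees placeholders, and the substitution back happens after the response returns.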
29

Accelerated clinical prompt gamma simulations for proton therapy / Simulations cliniques des gamma prompt accélérées pour la Hadronthérapie

Huisman, Brent 19 May 2017 (has links)
After an introduction to particle therapy and prompt gamma detection, this doctoral dissertation comprises two main contributions: the development of a fast prompt gamma (PG) simulation method and its application in a study of change detectability in clinical treatments. The variance reduction method (named vpgTLE) is a two-stage track-length estimation method developed to estimate the PG yield in voxelized volumes. As primary particles are propagated through the patient CT, the PG yields are computed as a function of the current energy of the primary, the material in the voxel, and the step length. The second stage uses this intermediate image as a source to generate and propagate PGs throughout the rest of the scene geometry, e.g. into a detection device. For both a heterogeneous geometrical phantom and a complete patient CT treatment plan, at a convergence level of 2% relative uncertainty on the PG yield per voxel in the 90% yield region, a gain of around 10^3 with respect to analog MC was achieved. The method agrees with reference analog MC simulations to within 10^-4 per voxel, with negligible bias. The second major study conducted in this PhD program was on PG fall-off position (FOP) estimation in clinical simulations. The number of protons (spot weight) required for a consistent FOP estimate was investigated for the first time for two optimized PG cameras, a multi-parallel slit (MPS) and a knife-edge slit (KES) design. Three spots were selected for an in-depth study; at the prescribed spot weights they were found to produce results of insufficient precision, rendering usable clinical output at the spot level unlikely. When the spot weight is artificially increased to 10^9 primaries, the FOP estimate reaches millimetric precision. On the FOP shift, the MPS camera provides between 0.71 and 1.02 mm (1σ) precision for the three spots at 10^9 protons; the KES between 2.10 and 2.66 mm. Grouping iso-energy layers was employed in passive-delivery PG detection for one of the PG camera prototypes. In iso-depth grouping, enabled by active delivery, spots with similar distal dose fall-offs are grouped so as to provide well-defined fall-offs, as an attempt to sidestep range mixing. It is shown that grouping spots does not necessarily degrade precision compared to the artificially increased spot weight, which means some form of spot grouping could enable clinical use of these PG cameras. With all spots or spot groups, the MPS has a better signal than the KES, thanks to a larger detection efficiency and a lower background level due to time-of-flight selection.
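The scoring step of a track-length estimator like vpgTLE's first stage — accumulating a per-voxel PG yield as (yield per unit path at the primary's current energy) times (path length through the voxel) — can be caricatured in one dimension. Everything here is a toy under stated assumptions: the constant linear energy loss and the yield model are invented placeholders, not physical data, and the real method works on a 3D patient CT with material-dependent yields.

```python
# Toy 1D track-length estimation: a primary loses energy at a fixed rate while
# crossing voxels; each voxel scores yield_per_mm(E) * path length through it.
def yield_per_mm(energy_mev: float) -> float:
    # Hypothetical yield model, NOT physical data: yield rises as energy drops.
    return 0.01 * max(0.0, 160.0 - energy_mev) / 160.0

def simulate(n_voxels=10, voxel_mm=10.0, e0=160.0, de_per_mm=1.0):
    yields = [0.0] * n_voxels
    energy = e0
    for v in range(n_voxels):
        if energy <= 0:
            break
        step = min(voxel_mm, energy / de_per_mm)  # primary may stop mid-voxel
        yields[v] = yield_per_mm(energy) * step   # first-stage TLE scoring
        energy -= step * de_per_mm
    return yields

print(simulate())
```

The efficiency gain of the real method comes from exactly this structure: the expensive transport of the primary is done once, and the scored yield image then serves as a source for the second stage instead of sampling individual PG emissions along every track.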
30

Transforming Education into Chatbot Chats : The implementation of Chat-GPT to prepare educational content into a conversational format to be used for practicing skills / Omvandla utbildningsmaterial till chattbot-samtal : Implementeringen av Chat-GPT för att förbereda utbildningsmaterial till konversationsbaserat format för inlärnings syften

Wickman, Simon, Zandin, Philip January 2023 (has links)
In this study we explore the possibility of using ChatGPT to summarize large amounts of educational content and place it in a template that can later be used for dialogue purposes, and we examine the challenges and solutions that arise during the implementation. Today, users struggle to create well-made prompts for learning scenarios that fulfil all of their requirements. This problem is significant as it addresses the challenges of information overload and how generating prompts for dialogue purposes can be made trivial for users. We addressed it by building an implementation for the company Fictive Reality in their application, conducting research, and performing tests. The implementation was made with OpenAI's application programming interface and the ChatGPT-4 model, which is popular for its wide range of domain knowledge, and we connected it to a web page where users could upload text or audio files. A suitable summarization prompt was found primarily through experimentation supported by previous research. We used automatic evaluation metrics such as ROUGE, BERTScore, and ChatGPT self-evaluation, and also had users give feedback on the implementation and the quality of the result. This study shows that ChatGPT effectively summarizes extensive educational content and transforms it into dialogue templates for ChatGPT to use. The research demonstrates streamlined and improved prompt creation, addressing the challenges of information overload. The efficiency and quality either equalled or surpassed user-generated prompts while preserving almost all relevant information, and the time consumed by this task was reduced by a substantial margin. Our biggest struggle was getting ChatGPT to follow our instructions, but with research and an iterative approach the process became much smoother. ChatGPT exhibits robust potential for enhancing educational prompt generation. Future work could be dedicated to improving the prompt further by making it more flexible.
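The summarize-into-dialogue-template step described above can be sketched as a prompt builder. The wording below is hypothetical, not the prompt the authors converged on through their experimentation; it only shows the shape of the task — fixed output format, turn budget, and an instruction to preserve the source material's facts.

```python
def dialogue_template_prompt(material: str, max_turns: int = 4) -> str:
    """Assemble a summarization prompt (hypothetical wording) asking the model
    to turn course material into a role-play dialogue template."""
    return (
        "Summarize the educational material below and rewrite it as a "
        f"role-play dialogue template with at most {max_turns} turns.\n"
        "Format each turn as 'Trainer:' or 'Learner:'.\n"
        "Keep every key fact from the material.\n\n"
        f"Material:\n{material}"
    )

prompt = dialogue_template_prompt(
    "Photosynthesis converts light into chemical energy."
)
print(prompt)
```

Pinning down the output format in the prompt is what makes the result machine-usable downstream: the application can parse `Trainer:`/`Learner:` turns directly instead of post-processing free-form text.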
