1

PRECISION PAIRINGS: Consultant Assignment Matching with Local Large Language Models

Arlt Strömberg, Wilmer January 2023 (has links)
This master thesis explores the application of local Large Language Models (LLMs) in the consultancy industry, focusing on the challenge of matching consultants to client assignments. The study develops and evaluates a structured pipeline that integrates an LLM to automate the consultant-assignment matching process. The research encompasses a comprehensive methodology, culminating in a sophisticated LLM application. The core of the thesis is an in-depth analysis of how the LLM and its constituent components, such as nodes, embedding models, and vector store indexes, contribute to the matching process. Special emphasis is placed on the role of the LLM's temperature setting and its impact on match accuracy and quality. Through methodical experimentation and evaluation, the study sheds light on the effectiveness of the LLM in accurately matching consultants to assignments and generating coherent motivations. This master thesis establishes a foundational framework for the use of LLMs in consultancy matching, presenting a significant step towards the integration of AI in the field, and it opens avenues for future research aimed at enhancing the efficiency and precision of AI-driven consultant matching in the consulting industry.
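The matching step this abstract describes can be illustrated with a minimal sketch: consultant profiles and the assignment are embedded as vectors and ranked by similarity, which is the role the thesis assigns to its embedding model and vector store index. The names, profiles, and the bag-of-words embedding below are illustrative stand-ins under stated assumptions, not the thesis's actual components.

```python
from collections import Counter
from math import sqrt

def embed(text, vocab):
    """Toy bag-of-words vector; a real pipeline would call a sentence-embedding model."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_consultants(assignment, consultants):
    """Rank (name, profile) pairs by profile-to-assignment similarity."""
    texts = [assignment] + [profile for _, profile in consultants]
    vocab = sorted(set(" ".join(texts).lower().split()))
    query = embed(assignment, vocab)
    return sorted(consultants,
                  key=lambda c: cosine(embed(c[1], vocab), query),
                  reverse=True)

consultants = [
    ("Alice", "senior java backend developer with cloud experience"),
    ("Bob", "data scientist focused on machine learning and python"),
]
ranked = rank_consultants("client needs a python machine learning specialist", consultants)
print(ranked[0][0])  # Bob: his profile shares "python", "machine", "learning" with the assignment
```

In a production system, a vector store index precomputes and caches the profile embeddings so only the assignment needs embedding at query time.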
2

DEMOCRATISING DEEP LEARNING IN MICROBIAL METABOLITES RESEARCH / DEMOCRATISING DEEP LEARNING IN NATURAL PRODUCTS RESEARCH

Dial, Keshav January 2023 (has links)
Deep learning models are dominating performance across a wide variety of tasks. From protein folding to computer vision to voice recognition, deep learning is changing the way we interact with data. The field of natural products, and more specifically genomic mining, has been slow to adopt these new technological innovations. As we are in the midst of a data explosion, this is not for lack of training data; rather, it is due to the lack of a blueprint demonstrating how to correctly integrate these models to maximise performance and inference. During my PhD, I showcase the use of large language models across a variety of data domains to improve common workflows in the field of natural product drug discovery. I improved natural product scaffold comparison by representing molecules as sentences. I developed a series of deep learning models to replace archaic technologies and create a more scalable genomic mining pipeline, decreasing running times by 8X. I integrated deep learning-based genomic and enzymatic inference into legacy tooling to improve the quality of short-read assemblies. I also demonstrate how intelligent querying of multi-omic datasets can facilitate the gene signature prediction of encoded microbial metabolites. The models and workflows I developed are broad in scope, with the aim of blueprinting how these industry-standard tools can be applied across the entirety of natural product drug discovery. / Thesis / Doctor of Philosophy (PhD)
3

Improving Vulnerability Description Using Natural Language Generation

Althebeiti, Hattan 01 January 2023 (has links) (PDF)
Software plays an integral role in powering numerous everyday computing gadgets. As our reliance on software continues to grow, so does the prevalence of software vulnerabilities, with significant implications for organizations and users. Documenting vulnerabilities and tracking their development therefore becomes crucial. Vulnerability databases address this issue by storing a record with various attributes for each discovered vulnerability. However, their contents suffer from several drawbacks, which we address in our work. In this dissertation, we investigate the weaknesses associated with vulnerability descriptions in public repositories and alleviate them through Natural Language Processing (NLP) approaches. The first contribution examines vulnerability descriptions in those databases and approaches to improving them. We propose a new automated method that leverages external sources to enrich the scope and context of a vulnerability description. Moreover, we exploit fine-tuned pretrained language models to normalize the resulting description. The second contribution investigates the need for a uniform and normalized structure in vulnerability descriptions. We address this need by breaking the description of a vulnerability into multiple constituents and developing a multi-task model that creates a new uniform and normalized summary, one that maintains the necessary attributes of the vulnerability using the extracted features while ensuring a consistent vulnerability description. Our method proved effective in generating new summaries with the same structure across a collection of various vulnerability descriptions and types. Our final contribution investigates the feasibility of assigning the Common Weakness Enumeration (CWE) attribute to a vulnerability based on its description. CWE offers a comprehensive framework that categorizes similar exposures into classes, representing the types of exploitation associated with such vulnerabilities.
Our approach utilizing pre-trained language models is shown to outperform Large Language Models (LLMs) on this task. Overall, this dissertation provides various technical approaches that exploit advances in NLP to improve publicly available vulnerability databases.
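The CWE-assignment task framed above is a text-classification problem: map a free-text vulnerability description to a CWE class. The sketch below uses a toy keyword scorer as a stand-in for the fine-tuned language model; the CWE identifiers are real, but the keyword profiles and scoring are illustrative assumptions, not the dissertation's method.

```python
# Toy stand-in for a fine-tuned classifier head: scores a description
# against keyword profiles for a few CWE classes.
CWE_PROFILES = {
    "CWE-79 (Cross-site Scripting)": {"script", "html", "xss", "web", "browser"},
    "CWE-89 (SQL Injection)": {"sql", "query", "database", "injection"},
    "CWE-120 (Buffer Overflow)": {"buffer", "overflow", "memory", "bounds"},
}

def predict_cwe(description):
    """Return the CWE class whose keyword profile overlaps the description most."""
    tokens = set(description.lower().replace(".", " ").replace(",", " ").split())
    scores = {cwe: len(tokens & keywords) for cwe, keywords in CWE_PROFILES.items()}
    return max(scores, key=scores.get)

desc = "A crafted request allows SQL injection via an unsanitized query parameter."
print(predict_cwe(desc))  # CWE-89 (SQL Injection)
```

A fine-tuned pre-trained model replaces the keyword overlap with learned contextual features, which is what allows it to handle descriptions that never mention the weakness class by name.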
4

Productivity, Cost and Environmental Damage of Four Logging Methods in Forestry of Northern Iran

Badraghi, Naghimeh 04 July 2014 (has links) (PDF)
Increasing productivity, reducing cost, reducing soil damage, and reducing the impact of harvesting on standing trees and regeneration are all important objectives of the ground skidding system in the management of the Hyrcanian forest. To address these objectives, the research examined four logging methods in northern Iran: the tree length method (TLM), long length method (LLM), short length method (SLM), and wood extraction by mule (mule). To determine the cost per unit, time study techniques were used for each harvesting method, and the time study data were transformed to base-10 logarithms. On the basis of the developed models, 11 skidding turns were simulated and the unit costs were estimated as functions of log diameter (DL), skidding distance (SD), and winching distance (WD) for 11 different cycles with TLM, LLM and SLM. The results showed that, on average, the net costs of extracting one cubic meter of wood were 3.06, 5.69, 6.81 and 34.36 €/m3 for TLM, LLM, SLM and mule, respectively. The costs as functions of DL, SD and WD showed that the most economical alternative for northern Iran is TLM. In the cut-to-length system, the costs of both alternatives (LLM, SLM) depended significantly on DL; the results therefore suggest that the cut-to-length system is not an economical alternative as long as the diameter of the felled trees is less than 40 cm, whilst the cut-to-length method can be applied for trees with diameters above 40 cm. For diameters above 40 cm, TLM is more economical than SLM, although the difference was not significant. Depending on SD, SLM is preferable to LLM over short skidding distances, but LLM is more economical than SLM over long skidding distances. Winching distance was not a significant cost factor.
To assess the damage to seedlings and standing trees, a 100% inventory method was employed pre-hauling and post-hauling alongside skidding trails, winching strips and mule hauling paths, with a 12 m width. To choose the best alternative with respect to standing damage, analysis by multiple criteria approval (MA) was applied. The shares of trees damaged by the winching operation were 11.89% in TLM, 14.44% in LLM, 27.59% in SLM and no stems for mule; the skidding operation damaged 16.73%, 3.13% and 8.78% of total trees in TLM, LLM and SLM. In the winching area, about 14%, 20%, 21% and 6% of the total regeneration was damaged by TLM, LLM, SLM and mule, and the skidding operation damaged 7.5% in TLM, 7.4% in LLM and 9.4% in SLM. The alternative friendliest to the residual stand was the mule, but among the manual methods (where wood extraction is done by skidder) MA showed that the best alternative with respect to residual damage is LLM. To determine the degree of soil compaction, a core sampling technique for bulk density was used. Soil samples were collected from the horizontal face of a soil pit with a 10 cm deep soil core, at 50 m intervals on skid trails, in winching strips and in a control area (no vehicle passes); a soil sample was taken at 10 m intervals along the hauling direction of the mule. To determine the post-harvest extent of disturbance caused by skidding operations, the disturbed widths were measured at 50 m intervals along the skid trails. In the winching area, where the winched logs created a streak of displaced soil, the width of the displaced streak was measured at 5 m intervals along the winching strip. In mule hauling operations, the width of the streak created by the mule foot track was measured at 10 m intervals. To compare the increase in average bulk density between alternatives, one-way ANOVA, the Duncan test and the Dunnett t-test with a 95% confidence level were used. A general linear model was applied to relate the increase in bulk density to the slope gradient.
To determine the correlation between the increase in soil bulk density and the slope gradient, and between soil compaction and soil moisture content (%), the Pearson correlation test was applied. To choose the best alternative (among the manual methods), an MA test was applied again. The bulk density on the skidding trail increased by 51% for 30 skidding turns, 35% for 31 skidding turns (one unloaded and one loaded pass) and 46% for 41 skidding turns. The ANOVA results (p < 0.05) show significant differences in bulk density between alternatives. The Duncan test and the Dunnett t-test indicated that the increase in soil bulk density was not significant between the control samples and either the TLM winching strip or the mule extraction samples. The general linear model and the Pearson correlation test indicated that the slope gradient had an insignificant effect on soil compaction, whilst the Pearson test indicated a medium negative correlation between soil compaction and soil moisture percentage. The ground-based winching operation disturbed and compacted 0.07%, 0.03%, 0.05% and 0.002% of the total area, and the ground-based skidding operation 1.21%, 1.67%, 0.81% and 0.00%, in TLM, LLM, SLM and mule, respectively. The Pearson correlation results show that the width of the disturbed area was significantly influenced by the diameter and length of the logs (p < 0.05), but there was no significant correlation between soil disturbance width and slope. The MA results showed that soil compaction was not related to logging method, but the MA sensitivity analysis shows that LLM and TLM are both preferable to SLM.
5

Productivity, Cost and Environmental Damage of Four Logging Methods in Forestry of Northern Iran

Badraghi, Naghimeh 20 December 2013 (has links)
Increasing productivity, reducing cost, reducing soil damage, and reducing the impact of harvesting on standing trees and regeneration are all important objectives of the ground skidding system in the management of the Hyrcanian forest. To address these objectives, the research examined four logging methods in northern Iran: the tree length method (TLM), long length method (LLM), short length method (SLM), and wood extraction by mule (mule). To determine the cost per unit, time study techniques were used for each harvesting method, and the time study data were transformed to base-10 logarithms. On the basis of the developed models, 11 skidding turns were simulated and the unit costs were estimated as functions of log diameter (DL), skidding distance (SD), and winching distance (WD) for 11 different cycles with TLM, LLM and SLM. The results showed that, on average, the net costs of extracting one cubic meter of wood were 3.06, 5.69, 6.81 and 34.36 €/m3 for TLM, LLM, SLM and mule, respectively. The costs as functions of DL, SD and WD showed that the most economical alternative for northern Iran is TLM. In the cut-to-length system, the costs of both alternatives (LLM, SLM) depended significantly on DL; the results therefore suggest that the cut-to-length system is not an economical alternative as long as the diameter of the felled trees is less than 40 cm, whilst the cut-to-length method can be applied for trees with diameters above 40 cm. For diameters above 40 cm, TLM is more economical than SLM, although the difference was not significant. Depending on SD, SLM is preferable to LLM over short skidding distances, but LLM is more economical than SLM over long skidding distances. Winching distance was not a significant cost factor.
To assess the damage to seedlings and standing trees, a 100% inventory method was employed pre-hauling and post-hauling alongside skidding trails, winching strips and mule hauling paths, with a 12 m width. To choose the best alternative with respect to standing damage, analysis by multiple criteria approval (MA) was applied. The shares of trees damaged by the winching operation were 11.89% in TLM, 14.44% in LLM, 27.59% in SLM and no stems for mule; the skidding operation damaged 16.73%, 3.13% and 8.78% of total trees in TLM, LLM and SLM. In the winching area, about 14%, 20%, 21% and 6% of the total regeneration was damaged by TLM, LLM, SLM and mule, and the skidding operation damaged 7.5% in TLM, 7.4% in LLM and 9.4% in SLM. The alternative friendliest to the residual stand was the mule, but among the manual methods (where wood extraction is done by skidder) MA showed that the best alternative with respect to residual damage is LLM. To determine the degree of soil compaction, a core sampling technique for bulk density was used. Soil samples were collected from the horizontal face of a soil pit with a 10 cm deep soil core, at 50 m intervals on skid trails, in winching strips and in a control area (no vehicle passes); a soil sample was taken at 10 m intervals along the hauling direction of the mule. To determine the post-harvest extent of disturbance caused by skidding operations, the disturbed widths were measured at 50 m intervals along the skid trails. In the winching area, where the winched logs created a streak of displaced soil, the width of the displaced streak was measured at 5 m intervals along the winching strip. In mule hauling operations, the width of the streak created by the mule foot track was measured at 10 m intervals. To compare the increase in average bulk density between alternatives, one-way ANOVA, the Duncan test and the Dunnett t-test with a 95% confidence level were used. A general linear model was applied to relate the increase in bulk density to the slope gradient.
To determine the correlation between the increase in soil bulk density and the slope gradient, and between soil compaction and soil moisture content (%), the Pearson correlation test was applied. To choose the best alternative (among the manual methods), an MA test was applied again. The bulk density on the skidding trail increased by 51% for 30 skidding turns, 35% for 31 skidding turns (one unloaded and one loaded pass) and 46% for 41 skidding turns. The ANOVA results (p < 0.05) show significant differences in bulk density between alternatives. The Duncan test and the Dunnett t-test indicated that the increase in soil bulk density was not significant between the control samples and either the TLM winching strip or the mule extraction samples. The general linear model and the Pearson correlation test indicated that the slope gradient had an insignificant effect on soil compaction, whilst the Pearson test indicated a medium negative correlation between soil compaction and soil moisture percentage. The ground-based winching operation disturbed and compacted 0.07%, 0.03%, 0.05% and 0.002% of the total area, and the ground-based skidding operation 1.21%, 1.67%, 0.81% and 0.00%, in TLM, LLM, SLM and mule, respectively. The Pearson correlation results show that the width of the disturbed area was significantly influenced by the diameter and length of the logs (p < 0.05), but there was no significant correlation between soil disturbance width and slope. The MA results showed that soil compaction was not related to logging method, but the MA sensitivity analysis shows that LLM and TLM are both preferable to SLM.
6

Exploring Knowledge Vaults with ChatGPT : A Domain-Driven Natural Language Approach to Document-Based Answer Retrieval

Hammarström, Mathias January 2023 (has links)
Problemlösning är en viktig aspekt i många yrken. Inklusive fabriksmiljöer, där problem kan leda till minskad produktion eller till och med produktionsstopp. Denna studie fokuserar på en specifik domän: en massafabrik i samarbete med SCA Massa. Syftet med studien är att undersöka potentialen av ett frågebesvarande system för att förbättra arbetarnas förmåga att lösa problem genom att förse dem med möjliga lösningar baserat på en naturlig beskrivning av problemet. Detta uppnås genom att ge arbetarna ett naturligt språk gränssnitt till en stor mängd domänspecifika dokument. Mer specifikt så fungerar systemet genom att utöka ChatGPT med domänspecifika dokument som kontext för en fråga. De relevanta dokumenten hittas med hjälp av en retriever, som använder vektorrepresentationer för varje dokument och jämför sedan dokumentens vektorer med frågans vektor. Resultaten visar att systemet har genererat rätt svar 92% av tiden, felaktigt svar 5% av tiden och inget svar ges 3% av tiden. Slutsatsen av denna studie är att det implementerade frågebesvarande systemet är lovande, speciellt när det används av en expert eller skicklig arbetare som är mindre benägen att vilseledas av felaktiga svar. Dock, på grund av studiens begränsade omfattning så krävs ytterligare studier för att avgöra om systemet är redo att distribueras i verkliga miljöer. / Problem solving is a key aspect of many professions, including factory settings, where problems can cause production to slow down or even halt completely. The specific domain for this project is a pulp factory, in collaboration with SCA Pulp. This study explores the potential of a question-answering system to enhance workers' ability to solve a problem by providing possible solutions from a natural language description of the problem. This is accomplished by giving workers a natural language interface to a large corpus of domain-specific documents.
More specifically, the system works by augmenting ChatGPT with domain-specific documents as context for a question. The relevant documents are found using a retriever, which uses vector representations of each document and compares the documents' vectors with the question vector. The results show that the system generated a correct answer 92% of the time, an incorrect answer 5% of the time, and no answer 3% of the time. The conclusion drawn from this study is that the implemented question-answering system is promising, especially when used by an expert or skilled worker who is less likely to be misled by incorrect answers. However, due to the study's small scale, further study is required before concluding that the system is ready to be deployed in real-world scenarios.
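The retrieve-then-augment pattern this abstract describes can be sketched in a few lines: select the most question-relevant documents, then place them in the prompt as context. Word overlap stands in for the vector similarity the thesis's retriever uses, and the documents and wording below are invented for illustration.

```python
def retrieve(question, documents, k=1):
    """Rank documents by word overlap with the question (a stand-in for
    embedding-vector similarity) and return the top k."""
    q = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, documents):
    """Assemble an augmented prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "The digester pressure alarm usually indicates a clogged blow line.",
    "Monthly maintenance of the bleaching stage follows procedure B-12.",
]
prompt = build_prompt("What does the digester pressure alarm mean?", docs)
print("clogged blow line" in prompt)  # the relevant document was selected as context
```

The augmented prompt would then be sent to the chat model; constraining the answer to the retrieved context is what keeps the responses grounded in the domain documents.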
7

STL on Limited Local Memory (LLM) Multi-core Processors

January 2012 (has links)
abstract: Limited Local Memory (LLM) multicore architectures are promising power-efficient architectures with a scalable memory hierarchy. In LLM multicores, each core can access only a small local memory; accesses to the large shared global memory can only be made explicitly through Direct Memory Access (DMA) operations. The Standard Template Library (STL) is a powerful programming tool and is widely used for software development. The STL provides dynamic data structures, algorithms, and iterators for vector, deque (double-ended queue), list, map (red-black tree), etc. Since the size of the local memory in the cores of the LLM architecture is limited, and data transfer is not automatically supported by a hardware cache or the OS, the usability of current STL implementations on LLM multicores is limited: there is a hard limit on the amount of data they can handle. In this article, we propose and implement a framework which manages the STL container classes in the local memory of the LLM multicore architecture. Our proposal removes the data size limitation of the STL and therefore improves programmability on LLM multicore architectures with little change to the original program. Our implementation results in only about a 12%-17% increase in static library code size and reasonable runtime overheads. / Dissertation/Thesis / M.S. Computer Science 2012
8

Integrating ChatGPT into the UX Design Process : Ideation and Prototyping with LLMs

Ekvall, Hubert, Winnberg, Patrik January 2023 (has links)
This paper presents exploratory work on using Large Language Models (LLMs) in User Experience (UX) design. Previous research shows that UX designers struggle to envision novel designs and to prototype with AI as a design material. We set out to investigate how designers can be sensitized to LLMs, and the implications of these models for the professional role of UX designers. Using autobiographical design, we developed a prototype of a digital workspace (the "PromptBoard") for designing and prototyping chatbots with ChatGPT. A design sprint workshop with six participants was performed in an effort to answer the research questions by working with the PromptBoard. Discussions and participant-designed artifacts were analysed using thematic analysis. Findings include that participants were able to express design ideas and successfully create chatbots using the tool, but expressed a conflicting sense of lacking creativity or ownership of the results. Implications for the field of UX design are discussed.
9

Educational Artificial Intelligent Chatbot: Teacher Assistant & Study Buddy

Zarris, Dimitrios, Sozos, Stergios January 2023 (has links)
In the rapidly evolving landscape of artificial intelligence, the potential of large language models (LLMs) remains a focal point of exploration, especially in the domain of education. This research delves into the capabilities of AI-enhanced chatbots, with a spotlight on the "Teacher Assistant" & "Study Buddy" approaches. The study highlights the role of AI in offering adaptive learning experiences and personalized recommendations. As educational institutions and platforms increasingly turn to AI-driven solutions, understanding the intricacies of how LLMs can be harnessed to create meaningful and accurate educational content becomes paramount. The research adopts a systematic and multi-faceted methodology. At its core, the study investigates the interplay between prompt construction, engineering techniques, and the resulting outputs of the LLM. Two primary methodologies are employed: the application of prompt structuring techniques and the introduction of advanced prompt engineering methods. The former involves the progressive application of techniques such as persona and template, aiming to discern their individual and collective impacts on the LLM's outputs. The latter delves into more advanced techniques, such as few-shot and chain-of-thought prompts, to gauge their influence on the quality and characteristics of the LLM's responses. Complementing these is the "Study Buddy" approach, in which curricula from domains such as biology, mathematics, and physics serve as foundational materials for the experiments. The findings from this research are poised to have significant implications for the future of AI in education. By offering a comprehensive understanding of the variables that influence an LLM's performance, the study paves the way for the development of more refined and effective AI-driven educational tools.
As educators and institutions grapple with the challenges of modern education, tools that can generate accurate, relevant, and diverse educational content are invaluable. This thesis contributes to the academic understanding of LLMs and provides practical insights that can shape the future of AI-enhanced education; as education continues to evolve, the findings underscore the need for ongoing exploration and refinement to fully leverage AI's benefits in the educational sector.
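The prompt techniques named in this abstract (persona, few-shot, chain-of-thought) are simple string-construction patterns, sketched below. The persona wording, example Q/A pairs, and the step-by-step cue are illustrative assumptions, not the thesis's actual prompts.

```python
def persona_prompt(persona, task):
    """Persona technique: prepend a role description to the task."""
    return f"You are {persona}. {task}"

def few_shot_prompt(examples, query):
    """Few-shot technique: show worked Q/A pairs before the new question."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {query}\nA:"

def chain_of_thought_prompt(query):
    """Chain-of-thought technique: invite intermediate reasoning steps."""
    return f"{query}\nLet's think step by step."

# The techniques compose: a persona wrapping a few-shot quiz prompt.
prompt = persona_prompt(
    "an experienced biology teacher",
    few_shot_prompt(
        [("What pigment absorbs light in photosynthesis?", "Chlorophyll")],
        "Where does the Calvin cycle take place?",
    ),
)
print(prompt.splitlines()[0])
```

Applying the techniques progressively, as the study describes, amounts to layering these wrappers one at a time and comparing the model's outputs at each stage.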
10

Prompt engineering and its usability to improve modern psychology chatbots / Prompt engineering och dess användbarhet för att förbättra psykologichatbottar

Nordgren, Isak, E. Svensson, Gustaf January 2023 (has links)
As advancements in chatbots and Large Language Models (LLMs) such as GPT-3.5 and GPT-4 continue, their applications in diverse fields, including psychology, expand. This study investigates the effectiveness of LLMs optimized through prompt engineering, aiming to enhance their performance in psychological applications. To this end, two distinct versions of a GPT-3.5-based chatbot were developed: a version similar to the base model, and a version equipped with a more extensive system prompt detailing expected behavior. A panel of professional psychologists evaluated these models based on a predetermined set of questions, providing insight into their potential future use as psychological tools. Our results indicate that an overly prescriptive system prompt can unintentionally limit the versatility of the chatbot, making a careful balance in instruction specificity essential. Furthermore, while our study suggests that current LLMs such as GPT-3.5 are not capable of fully replacing human psychologists, they can provide valuable assistance in tasks such as basic question answering, consolation and validation, and triage. These findings provide a foundation for future research into the effective integration of LLMs in psychology and contribute valuable insights into the promising field of AI-assisted psychological services. / I takt med att framstegen inom chatbots och stora språkmodeller (LLMs) som GPT-3.5 och GPT-4 fortsätter utvidgas deras potentiella tillämpningar inom olika områden, inklusive psykologi. Denna studie undersöker effektiviteten av LLMs optimerade genom prompt engineering, med målet att förbättra deras prestanda inom psykologiska tillämpningar. I detta syfte utvecklades två distinkta versioner av en chatbot baserad på GPT-3.5: en version som liknar bas-modellen, och en version utrustad med en mer omfattande systemprompt som detaljerar förväntat beteende. 
En panel av professionella psykologer utvärderade dessa modeller baserat på en förbestämd uppsättning frågor, vilket ger inblick i deras potentiella framtida användning som psykologiska verktyg. Våra resultat tyder på att en överdrivet beskrivande systemprompt kan ofrivilligt begränsa chatbotens mångsidighet, vilket kräver en noggrann balans i specificiteten av prompten. Vidare antyder vår studie att nuvarande LLMs som GPT-3.5 inte kan ersätta mänskliga psykologer helt och hållet, men att de kan ge värdefull hjälp i uppgifter som grundläggande frågebesvaring, tröst och bekräftelse, samt triage. Dessa resultat ger en grund för framtida forskning om effektiv integration av LLMs inom psykologi och bidrar med värdefulla insikter till det lovande fältet av AI-assisterade psykologtjänster.
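The study's two conditions, a near-base model versus one with an extensive behavioral system prompt, differ only in the system message sent with each conversation. The sketch below shows that setup in an OpenAI-style message format; the prompt texts are invented for illustration, not the ones evaluated in the thesis.

```python
# Two system-prompt conditions: minimal vs. extensively specified behavior.
BASE_PROMPT = "You are a helpful assistant."
EXTENSIVE_PROMPT = (
    "You are a supportive psychology assistant. Answer empathetically, "
    "never diagnose, encourage seeking professional help for serious issues, "
    "and keep responses brief and validating."
)

def build_messages(system_prompt, user_turns):
    """Assemble a chat-completions style message list (system message first)."""
    messages = [{"role": "system", "content": system_prompt}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
    return messages

msgs = build_messages(EXTENSIVE_PROMPT, ["I feel anxious before exams."])
print(len(msgs))  # 2: one system message plus one user turn
```

The study's finding that an overly prescriptive system prompt limits versatility suggests evaluating several points between these two extremes rather than only the endpoints.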
