21 |
Productivity, Cost and Environmental Damage of Four Logging Methods in Forestry of Northern Iran / Badraghi, Naghimeh 20 December 2013 (has links)
Increasing productivity, reducing cost, reducing soil damage, and reducing the impact of harvesting on standing trees and regeneration are all important objectives of the ground-skidding system in the management of the Hyrcanian forest. The research carried out to address these objectives covered four logging methods in northern Iran: the tree-length method (TLM), the long-length method (LLM), the short-length method (SLM), and wood extraction by mule (mule). To determine the cost per unit, time-study techniques were used for each harvesting method, and the time-study data were transformed to base-10 logarithms. On the basis of the developed models, 11 skidding turns were simulated and the unit costs estimated as functions of log diameter (DL), skidding distance (SD) and winching distance (WD) for 11 different cycles with TLM, LLM and SLM.
The results showed that, on average, the net cost of extracting one cubic meter of wood was 3.06, 5.69, 6.81 and 34.36 €/m3 for TLM, LLM, SLM and mule, respectively. The costs as functions of DL, SD and WD showed that the most economical alternative for northern Iran is TLM. In the cut-to-length system, the costs of both alternatives (LLM and SLM) depended significantly on DL; the results of this study therefore suggest that as long as the diameter of the felled trees is less than 40 cm, the cut-to-length system is not an economical alternative, whereas it can be applied for trees with a diameter of more than 40 cm. For diameters above 40 cm, TLM was still more economical than SLM, although the difference was not significant. With respect to SD, SLM is preferable to LLM over short skidding distances, but over long skidding distances LLM is more economical than SLM. Winching distance had no significant effect on cost.
To assess the damage to seedlings and standing trees, a 100% inventory method was employed before and after hauling along skidding trails, winching strips and mule-hauling corridors of 12 m width. To choose the best alternative with respect to stand damage, multiple criteria approval (MA) analysis was applied. The winching operation damaged 11.89% of trees in TLM, 14.44% in LLM, 27.59% in SLM and no stems with mule, while the skidding operation damaged 16.73%, 3.13% and 8.78% of the total trees in TLM, LLM and SLM, respectively. In the winching area, about 14%, 20%, 21% and 6% of the total regeneration was damaged by TLM, LLM, SLM and mule, and the skidding operation damaged 7.5% of regeneration in TLM, 7.4% in LLM and 9.4% in SLM. The alternative friendliest to the residual stand was the mule, but among the skidder-based methods (where wood extraction is done by skidder) MA showed that the best alternative with respect to residual damage is LLM.
To determine the degree of soil compaction, a core-sampling technique for bulk density was used. Soil samples were collected from the horizontal face of a soil pit with a 10 cm deep soil core, at 50 m intervals on the skid trails, in the winching strips and in the control area (no vehicle passes); along the hauling direction of the mule, a soil sample was taken at 10 m intervals. To determine the post-harvest extent of disturbance caused on the skid trails by skidding operations, the disturbed widths were measured at 50 m intervals along the skid trails. In the winching area, where the winched logs created a streak of displaced soil, the width of the displaced streak was measured at 5 m intervals along the winching strip. In mule-hauling operations, the width of the streak created by the mule foot track was measured at 10 m intervals.
To compare the increase in average bulk density between alternatives, one-way ANOVA, the Duncan test and the Dunnett t-test were used at a 95% confidence level. A general linear model was applied to relate the increase in bulk density to the slope gradient. To assess the correlation between the increment of soil bulk density and the slope gradient, and between soil compaction and soil moisture content (%), the Pearson correlation test was applied. To choose the best alternative (among the skidder-based methods), an MA test was applied again. The bulk density on the skidding trail increased by 51% after 30 skidding turns, 35% after 31 skidding turns and 46% after 41 skidding turns (one turn comprising one unloaded and one loaded pass). Results of ANOVA (p < 0.05) showed significant differences in bulk density between alternatives. The Duncan test and the Dunnett t-test indicated that the increase in soil bulk density was not significant between the control samples and either the winching strips of TLM or the extraction-by-mule samples.
The general linear model and the Pearson correlation test indicated that the slope gradient had no significant effect on soil compaction, while the Pearson test indicated a medium negative correlation between soil compaction and soil moisture percentage. Ground-based winching disturbed and compacted 0.07%, 0.03%, 0.05% and 0.002% of the total area, and ground-based skidding 1.21%, 1.67%, 0.81% and 0.00%, in TLM, LLM, SLM and mule, respectively. The Pearson correlation results showed that the width of the disturbed area was significantly influenced by the diameter and length of the logs (p < 0.05), but there was no significant correlation between soil disturbance width and slope. MA analysis showed that soil compaction was not related to logging method, but a sensitivity analysis of MA showed that LLM and TLM are both preferable to SLM.
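As an illustration of the Pearson correlation test used throughout the analysis above, here is a minimal pure-Python sketch with invented bulk-density and moisture values (the numbers are made up for illustration, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: compaction increase (%) vs. soil moisture (%);
# a negative r mirrors the medium negative correlation reported above.
compaction = [51, 46, 35, 30, 22, 15]
moisture = [18, 20, 25, 27, 33, 38]
r = pearson_r(compaction, moisture)
print(round(r, 2))
```

A value of r near -1 indicates that soil compaction decreases as moisture content rises, which is the direction of the relationship the study reports.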
|
22 |
Exploring Knowledge Vaults with ChatGPT : A Domain-Driven Natural Language Approach to Document-Based Answer Retrieval / Hammarström, Mathias January 2023 (has links)
Problem solving is a key aspect of many professions, including factory settings, where problems can cause production to slow down or even halt completely. The specific domain for this project is a pulp factory, in collaboration with SCA Pulp. This study explores the potential of a question-answering system to enhance workers' ability to solve a problem by providing possible solutions from a natural-language description of the problem. This is accomplished by giving workers a natural-language interface to a large corpus of domain-specific documents.
More specifically, the system works by augmenting ChatGPT with domain-specific documents as context for a question. The relevant documents are found using a retriever, which builds a vector representation of each document and compares the document vectors with the question vector. The results show that the system generated a correct answer 92% of the time, an incorrect answer 5% of the time, and no answer 3% of the time. The conclusion drawn from this study is that the implemented question-answering system is promising, especially when used by an expert or skilled worker who is less likely to be misled by the incorrect answers. However, due to the study's small scale, further study is required before concluding that the system is ready to be deployed in real-world scenarios.
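The retrieval step described above, comparing a question vector with each document's vector, can be sketched minimally with bag-of-words vectors and cosine similarity. The thesis's actual embedding model and retriever are not specified here, so the function names and toy corpus below are invented for illustration:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, k=1):
    """Return the k documents most similar to the question vector."""
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Invented mini-corpus standing in for the domain-specific documents.
docs = [
    "pump bearing overheats when lubrication is low",
    "digester pressure drops after valve failure",
    "conveyor belt misalignment causes jams",
]
top = retrieve("why does the pump bearing overheat", docs)
print(top[0])
```

A production retriever would use dense embeddings from a neural model rather than word counts, but the ranking-by-similarity step is the same.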
|
23 |
STL on Limited Local Memory (LLM) Multi-core Processors / January 2012 (has links)
abstract: Limited Local Memory (LLM) multicore architectures are promising power-efficient architectures with a scalable memory hierarchy. In LLM multicores, each core can access only a small local memory; accesses to the large shared global memory can be made only explicitly, through Direct Memory Access (DMA) operations. The Standard Template Library (STL) is a powerful programming tool and is widely used for software development. The STL provides dynamic data structures, algorithms, and iterators, including vector, deque (double-ended queue), list, and map (red-black tree). Since the size of the local memory in the cores of the LLM architecture is limited, and data transfer is not automatically supported by a hardware cache or the OS, the usability of the current STL implementation on LLM multicores is limited: there is a hard limit on the amount of data it can handle. In this article, we propose and implement a framework that manages the STL container classes in the local memory of the LLM multicore architecture. Our proposal removes the data-size limitation of the STL and therefore improves programmability on LLM multicore architectures with little change to the original program. Our implementation results in only about a 12%-17% increase in static library code size and reasonable runtime overheads. / Dissertation/Thesis / M.S. Computer Science 2012
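The core idea, keeping only a small window of a container's elements in scarce local memory and fetching the rest from global memory on demand, can be illustrated schematically. The following Python sketch is an invented analogy, not the thesis's C++ framework: a list-like container that keeps one page of elements "locally" and simulates a DMA transfer by loading pages from a backing store:

```python
class PagedList:
    """List-like container keeping one page of elements 'locally';
    other pages stay in a simulated global memory (backing store)."""

    def __init__(self, backing_store, page_size=4):
        self.global_mem = list(backing_store)   # simulated global memory
        self.page_size = page_size
        self.page_idx = None                    # which page is currently resident
        self.page = []                          # the locally resident elements
        self.dma_transfers = 0                  # count of simulated DMA loads

    def _load_page(self, idx):
        """Simulate a DMA transfer bringing one page into local memory."""
        page_no = idx // self.page_size
        if page_no != self.page_idx:
            start = page_no * self.page_size
            self.page = self.global_mem[start:start + self.page_size]
            self.page_idx = page_no
            self.dma_transfers += 1

    def __getitem__(self, idx):
        self._load_page(idx)
        return self.page[idx % self.page_size]

    def __len__(self):
        return len(self.global_mem)

data = PagedList(range(100), page_size=4)
print(data[0], data[1], data[50])  # the third access falls on a different page
print(data.dma_transfers)
```

The real framework performs this paging transparently inside the STL container classes, so existing programs need little modification.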
|
24 |
Integrating ChatGPT into the UX Design Process : Ideation and Prototyping with LLMs / Ekvall, Hubert, Winnberg, Patrik January 2023 (has links)
This paper presents exploratory work on using Large Language Models (LLMs) in User Experience (UX) design. Previous research shows that UX designers struggle to envision novel designs and to prototype with AI as a design material. We set out to investigate how designers can be sensitized to LLMs, and what the implications are for the professional role of UX designers. Using autobiographical design, we develop a prototype of a digital workspace (the "PromptBoard") for designing and prototyping chatbots utilizing ChatGPT. A design-sprint workshop with six participants is performed in an effort to answer the research questions by working with the PromptBoard. Discussions and participant-designed artifacts are analysed using thematic analysis. Findings include that participants are able to express design ideas and successfully create chatbots using the tool, but express a conflicting sense of lacking creativity or ownership of the results. Implications for the field of UX design are discussed.
|
25 |
Educational Artificial Intelligent Chatbot: Teacher Assistant & Study Buddy / Zarris, Dimitrios, Sozos, Stergios January 2023 (has links)
In the rapidly evolving landscape of artificial intelligence, the potential of large language models (LLMs) remains a focal point of exploration, especially in the domain of education. This research delves into the capabilities of AI-enhanced chatbots, with a spotlight on the "Teacher Assistant" & "Study Buddy" approaches. The study highlights the role of AI in offering adaptive learning experiences and personalized recommendations. As educational institutions and platforms increasingly turn to AI-driven solutions, understanding the intricacies of how LLMs can be harnessed to create meaningful and accurate educational content becomes paramount. The research adopts a systematic and multi-faceted methodology. At its core, the study investigates the interplay between prompt construction, engineering techniques, and the resulting outputs of the LLM. Two primary methodologies are employed: the application of prompt structuring techniques and the introduction of advanced prompt engineering methods. The former involves a progressive application of techniques like persona and template, aiming to discern their individual and collective impacts on the LLM's outputs. The latter delves into more advanced techniques, such as the few-shot prompt and chain-of-thought prompt, to gauge their influence on the quality and characteristics of the LLM's responses. Complementing these is the "Study Buddy" approach, where curricula from domains like biology, mathematics, and physics are utilized as foundational materials for the experiments. The findings from this research are poised to have significant implications for the future of AI in education. By offering a comprehensive understanding of the variables that influence an LLM's performance, the study paves the way for the development of more refined and effective AI-driven educational tools.
As educators and institutions grapple with the challenges of modern education, tools that can generate accurate, relevant, and diverse educational content can be invaluable. This thesis both contributes to the academic understanding of LLMs and provides practical insights that can shape the future of AI-enhanced education; as education continues to evolve, the findings underscore the need for ongoing exploration and refinement to fully leverage AI's benefits in the educational sector.
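The prompt-structuring techniques the abstract names (persona, template, few-shot examples, and a chain-of-thought cue) can be illustrated with a small prompt builder. This is a generic sketch of those general techniques, not the thesis's actual prompts; all strings are invented:

```python
def build_prompt(question, persona=None, examples=None, template="{q}"):
    """Compose a prompt from an optional persona, few-shot examples and a template."""
    parts = []
    if persona:
        parts.append(f"You are {persona}.")       # persona technique
    for ex_q, ex_a in (examples or []):           # few-shot technique
        parts.append(f"Q: {ex_q}\nA: {ex_a}")
    parts.append(template.format(q=question))     # template technique
    return "\n\n".join(parts)

prompt = build_prompt(
    "What is the derivative of x**2?",
    persona="a patient mathematics teacher",
    examples=[("What is the derivative of x**3?", "3*x**2")],
    template="Q: {q}\nA: Let's reason step by step.",  # chain-of-thought cue
)
print(prompt)
```

Applying the techniques progressively, as the study does, corresponds to toggling the `persona`, `examples` and `template` arguments and comparing the model's responses.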
|
26 |
ChatGPT: Ett hjälpmedel eller ett fuskverktyg? : En översiktsstudie om potentiella möjligheter och utmaningar med att integrera chattverktyg i undervisningen. / ChatGPT: A help or a cheat tool? : An overview study of the potential opportunities and challenges of integrating chat tools into teaching. / Plantinger, Hanna January 2024 (has links)
The aim of this study is to examine how teachers can utilize chatbots in education while continuing to use writing assignments as assessment tools. Through a scoping review, various strategies are presented by analyzing empirical material on the basis of a SWOT analysis. The study seeks to address the following research questions: How can chatbots be used to enhance teaching and learning in social studies? And what measures are emphasized to prevent potential challenges regarding the relationship between writing assignments and chatbots? The results section of the paper is structured as a categorical overview based on the didactic questions: what, how, and why? From the results, eight strategies are identified: Chatbots as co-creators, Student-active exercises, Teacher assistant, Formality tool, Note-taking, Individualized lesson planning, Critical thinking, and Reverse search. Overall, all strategies aimed to optimize both students' and teachers' work. From a student perspective, chatbots could serve as a support for individualizing the learning process based on the student's own conditions. From a teacher perspective, chatbots could optimize teachers' work and reduce workload. The results indicate that teachers can view chatbots as an additional resource during class time, a brainstorming tool during the planning phase, and an aid through feedback and professional development during the evaluation phase. The results also highlight several potential challenges to consider. The conclusion of this study is that writing assignments can still serve important functions in schools, though in a somewhat different manner than they have typically been employed historically. Chatbots can serve as a tool to meet the guidelines issued by the Swedish National Agency for Education in the national curriculum for social studies at the high-school level.
Based on the internal factors presented, there is a need for a willingness to develop and change traditional work methods, and the perception of what writing assignments should generate needs to change. All the strategies presented can either be seen as support during the writing process itself or as a supplementary assessment method for writing assignments. Based on the external factors, it is evident that the entire school as an organization needs to be involved for a successful integration of chatbots into education.
|
27 |
From Bytecode to Safety : Decompiling Smart Contracts for Vulnerability Analysis / Darwish, Malek January 2024 (has links)
This thesis investigated the use of Large Language Models (LLMs) for vulnerability analysis of decompiled smart contracts. A controlled experiment was conducted in which an automated system was developed to decompile smart contracts using two decompilers, Dedaub and Heimdall-rs, and subsequently analyze them using three LLMs: OpenAI's GPT-4 and GPT-3.5, as well as Meta's CodeLlama. The study focused on assessing the effectiveness of the LLMs at identifying a range of vulnerabilities. The evaluation method included the collection and comparative analysis of performance metrics such as precision, recall and F1-scores. Our results show the LLM-decompiler pairing of Dedaub and GPT-4 to exhibit impressive detection capabilities across a range of vulnerabilities, while failing to detect some vulnerabilities at which CodeLlama excelled. We demonstrate the potential of LLMs to improve smart contract security and set the stage for future research to further expand on this domain.
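The evaluation metrics mentioned above, precision, recall and F1-score, are computed from counts of true positives, false positives and false negatives. A minimal sketch with invented counts (not the thesis's results):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented counts for one hypothetical LLM-decompiler pairing:
# 8 vulnerabilities correctly flagged, 2 false alarms, 4 missed.
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
print(round(p, 2), round(r, 2), round(f1, 2))
```

Comparing these three numbers across the six LLM-decompiler pairings is how a study like this ranks the combinations.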
|
28 |
Empathetic AI for Enhanced Workplace Engagement / Empatisk AI för ökat arbetsplatsengagemang / Jusic, Samuel, Klockars, Love, Melinder, Anthony, Uddin, Anik, Wadman, Isak, Zanetti, Marcus January 2024 (has links)
This report outlines research focused on finding a system design for Happymaker AI, a large language model with a mission to promote well-being in workplaces through daily interactions. The study includes a market analysis of relevant system components, such as the database, cloud storage, cloud computing service and large language model, as well as the development of a prototype. Despite challenges including limited training data and resource constraints, the prototype was developed using the Llama 2 13B model, quantized to 8 bits and fine-tuned using LoRA. Through research and prototyping of Happymaker AI, recommendations for the system design were established. These findings provide a foundation for the further development of an ethical AI system, specifically tailored for user-data security and scalability. They also introduce a new perspective on empathy and personal well-being within the AI field, emphasizing the importance of integrating human-centric values into technological advancements.
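LoRA, the fine-tuning method mentioned above, freezes the pretrained weight matrix W and learns a low-rank update, so the adapted weight is W + (alpha/r) * B A, with B of shape d x r and A of shape r x d for a rank r much smaller than d. A minimal numeric sketch of that update (toy matrices, unrelated to the actual Llama 2 weights):

```python
def matmul(X, Y):
    """Naive matrix multiply for small nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_update(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the LoRA-adapted weight."""
    BA = matmul(B, A)
    s = alpha / r
    return [[W[i][j] + s * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy 4x4 frozen weight, rank r=1 adapters: B is 4x1, A is 1x4.
d, r, alpha = 4, 1, 2
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # identity
B = [[1.0], [0.0], [0.0], [0.0]]
A = [[0.0, 0.5, 0.0, 0.0]]
W_adapted = lora_update(W, A, B, alpha, r)
# Only d*r + r*d = 8 adapter values are trained instead of d*d = 16.
print(W_adapted[0])
```

The saving grows with d: for a real transformer layer, d is in the thousands, so the adapters are a tiny fraction of the frozen weights, which is what makes fine-tuning feasible under the resource constraints the report describes.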
|
29 |
Towards On-Premise Hosted Language Models for Generating Documentation in Programming Projects / Hedlund, Ludvig January 2024 (has links)
Documentation for programming projects can vary in both quality and availability. Availability can vary even more in a closed working environment, since fewer developers will read the documentation. Documenting programming projects can be demanding in worker hours and is often unappreciated among developers; it is a common conception that developers would rather invest time in developing a project than in documenting it, so making the documentation process more effective would benefit developers. To move towards a more automated process of writing documentation, this work generated documentation for repositories, attempting to summarize each repository's use cases and functionality. Two different implementations were created to generate documentation using an on-premise hosted large language model (LLM) as a tool. First, the embedded solution processes all available code in a project and creates the documentation from multiple summarizations of files and folders. Second, the RAG solution attempts to use only the most important parts of the code and lets the LLM create the documentation from a smaller subset of the codebase. The results show that generating documentation is possible but unreliable, and the output must be checked by a person with knowledge of the codebase. The embedded solution seems to be more reliable and produces better results, but is more costly than the RAG solution.
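The embedded approach described above, summarizing files first and then building folder summaries from them, amounts to a bottom-up pass over the directory tree. A schematic sketch with a stub in place of the on-premise LLM (the `summarize` stub and the toy repository tree are invented; a real system would call the hosted model at each step):

```python
def summarize(label, texts):
    """Stub standing in for an LLM call that condenses several texts."""
    return f"{label}: summary of [{'; '.join(texts)}]"

def document_tree(name, node):
    """Bottom-up summarization: files first, then each folder from its children."""
    if isinstance(node, str):                       # a file: summarize its content
        return summarize(name, [node])
    child_summaries = [document_tree(child, sub) for child, sub in node.items()]
    return summarize(name, child_summaries)         # a folder: summarize children

# Toy repository tree: folders are dicts, files are content strings.
repo = {
    "src": {
        "auth.py": "login and token refresh logic",
        "db.py": "database connection pooling",
    },
    "README.md": "project overview",
}
print(document_tree("repo", repo))
```

The cost difference the thesis reports follows from this structure: the embedded approach makes one LLM call per file and folder, while a RAG approach makes far fewer calls over a selected subset of the code.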
|
30 |
Chatting Over Course Material : The Role of Retrieval Augmented Generation Systems in Enhancing Academic Chatbots / Monteiro, Hélder January 2024 (has links)
Large Language Models (LLMs) have the potential to enhance learning among students. These tools can be used in chatbot systems that allow students to ask questions about course material, in particular when combined with so-called Retrieval-Augmented Generation (RAG) systems. RAG allows LLMs to access external knowledge, which improves the tailoring of responses when used in a chatbot system. This thesis studies different RAG configurations through an experimental approach in which each RAG system is constructed using different sets of parameters and tools, including small and large language models. We conclude by suggesting which of the RAG systems best adapts to high-school courses in physics and undergraduate courses in mathematics, such that the retrieval systems together with the LLMs return the most relevant answers from the provided course material. Two RAG-powered LLMs with different configurations achieved over 64% accuracy in physics and 66% in mathematics.
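The way a RAG system supplies retrieved course material to the model can be illustrated as prompt construction: the top-ranked passages are prepended as context before the student's question. A schematic sketch (the retriever here is a simple word-overlap stub and all strings are invented; the thesis's actual retrievers and parameters are not reproduced):

```python
def retrieve(question, passages, k=2):
    """Stub retriever: rank passages by words shared with the question."""
    qwords = set(question.lower().split())
    return sorted(passages,
                  key=lambda p: len(qwords & set(p.lower().split())),
                  reverse=True)[:k]

def build_rag_prompt(question, passages):
    """Prepend retrieved course material as context for the LLM."""
    context = "\n".join(f"- {p}" for p in retrieve(question, passages))
    return (f"Answer using only the course material below.\n"
            f"Course material:\n{context}\n\n"
            f"Question: {question}")

course_material = [
    "Newton's second law states that force equals mass times acceleration.",
    "The derivative of a function measures its instantaneous rate of change.",
    "Momentum is the product of mass and velocity.",
]
prompt = build_rag_prompt("What does Newton's second law state?", course_material)
print(prompt)
```

Varying the retriever, the number of passages `k`, and the underlying language model is the kind of parameter sweep the thesis's experiments perform.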
|