21 |
Productivity, Cost and Environmental Damage of Four Logging Methods in Forestry of Northern Iran. Badraghi, Naghimeh, 04 July 2014 (has links) (PDF)
Increasing productivity, reducing cost, reducing soil damage, and reducing the impact of harvesting on standing trees and regeneration are all very important objectives for ground-based skidding systems in the management of the Hyrcanian forest. To pursue these objectives, the research examined four logging methods in northern Iran: the tree-length method (TLM), the long-length method (LLM), the short-length method (SLM), and wood extraction by mule (mule). To determine the cost per unit, time-study techniques were used for each harvesting method, and the time-study data were transformed to base-10 logarithms. On the basis of the developed models, 11 skidding turns were simulated and the unit cost was estimated as a function of log diameter (DL), skidding distance (SD) and winching distance (WD) for 11 different cycles with TLM, LLM and SLM.
The results showed that, on average, the net cost of extracting one cubic metre of wood was 3.06, 5.69, 6.81 and 34.36 €/m3 for TLM, LLM, SLM and mule, respectively. The costs as functions of DL, SD and WD showed that the most economical alternative for northern Iran is TLM. In the cut-to-length system, the costs of both alternatives (LLM and SLM) depended significantly on DL; this study therefore suggests that as long as the diameter of the felled trees is less than 40 cm, the cut-to-length system is not an economical alternative, whereas it can be applied to trees with diameters of more than 40 cm. For diameters above 40 cm, TLM is more economical than SLM, although the difference was not significant. With respect to SD, SLM is preferable to LLM over short skidding distances, whereas over long skidding distances LLM is more economical than SLM. Winching distance had no significant effect on cost.
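The cost figures above come from regression models fitted to the time-study data and then applied to simulated skidding cycles. The thesis's fitted coefficients are not reproduced here, so the sketch below uses invented coefficients, an assumed load volume and an assumed machine rate purely to illustrate how a base-10 log cycle-time model is turned into a cost per cubic metre.

```python
# Illustrative sketch only: coefficients, load volume and machine rate are
# invented, not the thesis's fitted values. It shows how a base-10 log
# cycle-time model yields a unit cost (EUR/m3) for given DL, SD and WD.
B0, B_DL, B_SD, B_WD = 0.45, 0.004, 0.0009, 0.012  # assumed coefficients

def cycle_time_min(dl_cm: float, sd_m: float, wd_m: float) -> float:
    """Predicted skidding-cycle time in minutes: log10(T) = b0 + b1*DL + b2*SD + b3*WD."""
    return 10 ** (B0 + B_DL * dl_cm + B_SD * sd_m + B_WD * wd_m)

def unit_cost_eur_per_m3(dl_cm, sd_m, wd_m, load_m3=2.5, machine_rate_eur_h=55.0):
    """Net extraction cost per cubic metre: cycle time * machine rate / load volume."""
    return cycle_time_min(dl_cm, sd_m, wd_m) / 60.0 * machine_rate_eur_h / load_m3

# One simulated cycle: a 40 cm log skidded 300 m after 20 m of winching.
print(f"{unit_cost_eur_per_m3(40, 300, 20):.2f} EUR/m3")
```

Repeating the calculation over a grid of DL, SD and WD values gives the kind of cycle-by-cycle cost comparison between methods described above.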
To assess the damage to seedlings and standing trees, a 100% inventory was carried out before and after hauling along the skidding trails, winching strips and mule-hauling corridors over a width of 12 m. To choose the best alternative with respect to damage to the residual stand, multi-criteria approval analysis (MA) was applied. The share of trees damaged by the winching operation was 11.89% in TLM, 14.44% in LLM and 27.59% in SLM, with none damaged by the mule, and the skidding operation damaged 16.73%, 3.13% and 8.78% of the total trees in TLM, LLM and SLM, respectively. In the winching area about 14%, 20%, 21% and 6% of the total regeneration was damaged by TLM, LLM, SLM and mule, and the skidding operation damaged 7.5% in TLM, 7.4% in LLM and 9.4% in SLM. The alternative friendliest to the residual stand was the mule, but among the skidder-based methods MA showed that the best alternative with respect to residual damage is LLM.
To determine the degree of soil compaction, a core-sampling technique for bulk density was used. Soil samples were collected with a soil core from the horizontal face of a soil pit at 10 cm depth, at 50 m intervals on the skid trails, in the winching strips and in control areas (no vehicle passes); in the mule-hauling direction a soil sample was taken at 10 m intervals. To determine the post-harvest extent of disturbance on the skidding trails caused by the skidding operation, the disturbed width was measured at 50 m intervals along the skid trails. In the winching area, where the winched logs created a streak of displaced soil, the width of the displaced streak was measured at 5 m intervals along the winching strip. For the mule-hauling operation, the width of the streak created by the mule foot track was measured at 10 m intervals.
To compare the average increase in bulk density between alternatives, one-way ANOVA, the Duncan test and the Dunnett t-test were used at a 95% confidence level. A general linear model was applied to relate the increase in bulk density to the slope gradient, and the Pearson correlation test was applied to examine the correlation between the increase in soil bulk density and the slope gradient, and between soil compaction and soil moisture content (%). To choose the best alternative among the skidder-based methods, an MA test was applied again. The bulk density on the skidding trails increased by 51% after 30 skidding turns, 35% after 31 skidding turns and 46% after 41 skidding turns (one turn comprising one unloaded and one loaded pass). The ANOVA results (p < 0.05) show significant differences in bulk density between alternatives. The Duncan test and the Dunnett t-test indicated that the increase in soil bulk density was not significant between the control samples and either the TLM winching strips or the mule extraction samples.
The general linear model and the Pearson correlation test indicated that the slope gradient had no significant effect on soil compaction, whereas the Pearson test indicated a moderate negative correlation between soil compaction and soil moisture percentage. The ground-based winching operation disturbed and compacted 0.07%, 0.03%, 0.05% and 0.002% of the total area, and the ground-based skidding operation 1.21%, 1.67%, 0.81% and 0.00%, in TLM, LLM, SLM and mule, respectively. The Pearson correlation results show that the width of the disturbed area was significantly influenced by the diameter and length of the logs (p < 0.05), but there was no significant correlation between soil disturbance width and slope. The MA analysis showed that soil compaction was not related to the logging method, but the MA sensitivity analysis shows that LLM and TLM are both preferable to SLM.
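The statistical comparisons described above (one-way ANOVA across alternatives, Dunnett's test against the control, and Pearson correlations) map onto standard SciPy calls. The sketch below runs them on placeholder bulk-density and moisture data, since the thesis data are not reproduced here; Duncan's multiple range test has no SciPy implementation and is omitted.

```python
# Sketch of the test battery on placeholder data (not the thesis's measurements).
# scipy.stats.dunnett requires SciPy >= 1.11.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(1.10, 0.05, 30)        # bulk density, control area (g/cm^3)
tlm_winch = rng.normal(1.12, 0.05, 30)      # winching strip, TLM
llm_skid = rng.normal(1.55, 0.08, 30)       # skid trail, LLM

# One-way ANOVA across alternatives
f_stat, p_anova = stats.f_oneway(control, tlm_winch, llm_skid)

# Dunnett's test: each alternative compared against the control samples
dunnett = stats.dunnett(tlm_winch, llm_skid, control=control)

# Pearson correlation between soil compaction and moisture content
compaction = rng.normal(30, 10, 30)         # % increase in bulk density
moisture = 45 - 0.5 * compaction + rng.normal(0, 3, 30)
r, p_r = stats.pearsonr(compaction, moisture)

print(p_anova, dunnett.pvalue, r)
```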
|
23 |
Exploring Knowledge Vaults with ChatGPT: A Domain-Driven Natural Language Approach to Document-Based Answer Retrieval. Hammarström, Mathias, January 2023 (links)
Problem solving is a key aspect of many professions, including factory settings, where problems can cause production to slow down or even halt completely. The specific domain for this project is a pulp factory, in collaboration with SCA Pulp. This study explores the potential of a question-answering system to enhance workers' ability to solve a problem by providing possible solutions from a natural-language description of the problem. This is accomplished by giving workers a natural-language interface to a large corpus of domain-specific documents. More specifically, the system works by augmenting ChatGPT with domain-specific documents as context for a question. The relevant documents are found using a retriever, which uses vector representations for each document and compares the document vectors with the question vector. The results show that the system generated a correct answer 92% of the time, an incorrect answer 5% of the time, and no answer 3% of the time. The conclusion drawn from this study is that the implemented question-answering system is promising, especially when used by an expert or skilled worker who is less likely to be misled by incorrect answers. However, due to the study's small scale, further work is required to conclude that the system is ready to be deployed in real-world scenarios.
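As a rough illustration of the retrieval step described above (embed the documents and the question with text-embedding-ada-002, rank by vector similarity, and pass the best matches to ChatGPT as context), here is a minimal sketch using the openai Python client. The example documents, prompt wording and top-k value are assumptions, not the thesis's implementation.

```python
# Minimal retrieval-augmented QA sketch. Document texts, prompt wording and
# top_k are illustrative assumptions, not the thesis's actual system.
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

documents = [
    "If the digester pressure drops below the setpoint, check valve V-12 ...",
    "Conveyor belt C3 stops when the torque guard trips; reset procedure ...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(documents)

def answer(question, top_k=1):
    q_vec = embed([question])[0]
    # Cosine similarity between the question vector and each document vector
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(documents[i] for i in np.argsort(sims)[::-1][:top_k])
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("Conveyor C3 stopped - what should I check first?"))
```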
|
24 |
Enhancing agent learning through world dynamics modeling. Sun, Zhiyuan, 08 1900 (links)
The rapid evolution of artificial intelligence (AI) from models like BERT to large-scale foundation models illustrates the exponential growth in model sizes and capabilities, driven by
advances in computational power and data availability. Foundation models, which leverage self-supervised learning on vast, unlabelled datasets, have shown remarkable versatility
across a wide range of tasks, from language processing to knowledge representation. However,
their reliance on large-scale, predominantly internet-sourced data introduces a “knowledge gap”: a mismatch between the generalized knowledge acquired during training and the specialized knowledge required for specific domains. This gap is primarily caused by insufficient,
misleading, or superficial information available during training, which can lead to unreliable
outputs, especially in low-data or poor-quality data settings.
To address this challenge, we introduce the Discover, Verify, and Evolve (DiVE) framework. DiVE is designed to enhance the understanding of foundation models by equipping
them with deep, tailored knowledge about downstream tasks. The framework operates in
three stages:
∙ Discover the Information: Extract relevant and useful information to address the
lack of data that limits the model’s understanding of specialized domains.
∙ Verify the Information: Validate the gathered information to filter out inaccuracies and biases, ensuring only reliable knowledge is retained.
∙ Evolve the Information: Refine and expand on verified information to gain deeper
insights, improving the model’s ability to handle complex queries and perform accurately in specialized tasks.
By addressing the root causes of the knowledge gap, DiVE helps foundation models transition from general understanding to specialized expertise, bridging the gap between training and application. This approach enhances model accuracy across domains and improves
decision-making capabilities. In this thesis, we demonstrate the efficacy of DiVE through
empirical evaluations, highlighting its potential to enhance the adaptability and robustness
of foundation models in real-world scenarios.
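The abstract describes DiVE only at the level of its three stages, so the following is a hypothetical sketch of how a Discover, Verify and Evolve loop could be wired around a chat model; the prompts, helper functions and stopping rule are illustrative assumptions rather than the thesis's actual implementation.

```python
# Hypothetical sketch of a Discover -> Verify -> Evolve loop around a chat
# model. Prompts, helper names and the fixed round count are assumptions.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def dive(task_description: str, rounds: int = 2) -> str:
    knowledge = ""
    for _ in range(rounds):
        # Discover: elicit candidate domain knowledge for the downstream task.
        discovered = ask(
            f"Task: {task_description}\nKnown so far: {knowledge}\n"
            "List concrete rules or facts about this environment that would help."
        )
        # Verify: critique the candidates and keep only claims judged reliable.
        verified = ask(
            f"Candidate knowledge:\n{discovered}\n"
            "Remove anything inaccurate, misleading or unsupported; return the rest."
        )
        # Evolve: refine and generalize the verified knowledge.
        knowledge = ask(
            f"Refine and expand the following into deeper, more general insights:\n{verified}"
        )
    return knowledge

print(dive("Navigate a text-based game to cook a requested meal."))
```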
|
25 |
STL on Limited Local Memory (LLM) Multi-core Processors. January 2012 (links)
abstract: Limited Local Memory (LLM) multicore architectures are promising power-efficient architectures with a scalable memory hierarchy. In LLM multicores, each core can access only a small local memory; accesses to the large shared global memory can only be made explicitly, through Direct Memory Access (DMA) operations. The Standard Template Library (STL) is a powerful programming tool and is widely used for software development. The STL provides dynamic data structures, algorithms, and iterators for vector, deque (double-ended queue), list, map (red-black tree), etc. Since the size of the local memory in the cores of the LLM architecture is limited, and data transfer is not automatically supported by a hardware cache or the OS, the usability of current STL implementations on LLM multicores is limited; specifically, there is a hard limit on the amount of data they can handle. In this work, we propose and implement a framework that manages the STL container classes in the local memory of the LLM multicore architecture. Our proposal removes the data-size limitation of the STL and therefore improves programmability on LLM multicore architectures with little change to the original program. Our implementation results in only about a 12%-17% increase in static library code size and reasonable runtime overheads. / Dissertation/Thesis / M.S. Computer Science 2012
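The framework itself targets C++ STL containers on LLM multicores; purely to illustrate the underlying idea of keeping only a few blocks of a large container resident in a small local memory and moving data explicitly (as DMA transfers would), here is a conceptual sketch in Python. The block size, the eviction policy and the plain list standing in for global memory are assumptions, not the thesis's design.

```python
# Conceptual sketch (in Python, for brevity) of a container whose elements
# live in a large "global memory" while only a few fixed-size blocks are
# resident locally. Block size, eviction policy and the plain list standing
# in for DMA-managed buffers are illustrative assumptions.
from collections import OrderedDict

class LocalWindowVector:
    def __init__(self, global_memory, block_size=64, max_resident_blocks=4):
        self.global_memory = global_memory      # stands in for shared DRAM
        self.block_size = block_size
        self.max_resident = max_resident_blocks
        self.resident = OrderedDict()           # block id -> list of elements

    def _fetch(self, block_id):
        # "DMA get": copy one block from global to local memory.
        start = block_id * self.block_size
        self.resident[block_id] = self.global_memory[start:start + self.block_size]

    def _ensure_resident(self, block_id):
        if block_id in self.resident:
            self.resident.move_to_end(block_id)  # mark most recently used
            return
        if len(self.resident) >= self.max_resident:
            evicted_id, block = self.resident.popitem(last=False)
            # "DMA put": write the evicted block back to global memory.
            start = evicted_id * self.block_size
            self.global_memory[start:start + len(block)] = block
        self._fetch(block_id)

    def __getitem__(self, i):
        self._ensure_resident(i // self.block_size)
        return self.resident[i // self.block_size][i % self.block_size]

    def __setitem__(self, i, value):
        self._ensure_resident(i // self.block_size)
        self.resident[i // self.block_size][i % self.block_size] = value

data = LocalWindowVector(list(range(1_000_000)))
data[123_456] = -1
print(data[123_456], data[0])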
|
26 |
Integrating ChatGPT into the UX Design Process: Ideation and Prototyping with LLMs. Ekvall, Hubert; Winnberg, Patrik, January 2023 (links)
This paper presents exploratory work on using Large Language Models (LLMs) in User Experience (UX) design. Previous research shows that UX designers struggle to envision novel designs and to prototype with AI as a design material. We set out to investigate how designers can be sensitized to LLMs, and what the implications are for the professional role of UX designers. Using autobiographical design, we developed a prototype of a digital workspace (the “PromptBoard”) for designing and prototyping chatbots utilizing ChatGPT. A design-sprint workshop with six participants was performed in an effort to answer the research questions by working with the PromptBoard. Discussions and participant-designed artifacts were analysed using thematic analysis. Findings include that participants were able to express design ideas and successfully create chatbots using the tool, but expressed a conflicting sense of lacking creativity or ownership of the results. Implications for the field of UX design are discussed.
|
27 |
Educational Artificial Intelligent Chatbot: Teacher Assistant & Study Buddy. Zarris, Dimitrios; Sozos, Stergios, January 2023 (links)
In the rapidly evolving landscape of artificial intelligence, the potential of large language models (LLMs) remains a focal point of exploration, especially in the domain of education. This research delves into the capabilities of AI-enhanced chatbots, with a spotlight on the "Teacher Assistant" & "Study Buddy" approaches. The study highlights the role of AI in offering adaptive learning experiences and personalized recommendations. As educational institutions and platforms increasingly turn to AI-driven solutions, understanding how LLMs can be harnessed to create meaningful and accurate educational content becomes paramount. The research adopts a systematic and multi-faceted methodology. At its core, the study investigates the interplay between prompt construction, engineering techniques, and the resulting outputs of the LLM. Two primary methodologies are employed: the application of prompt-structuring techniques and the introduction of advanced prompt-engineering methods. The former involves a progressive application of techniques such as persona and template, aiming to discern their individual and collective impacts on the LLM's outputs. The latter delves into more advanced techniques, such as few-shot and chain-of-thought prompting, to gauge their influence on the quality and characteristics of the LLM's responses. Complementing these is the "Study Buddy" approach, in which curricula from domains such as biology, mathematics, and physics are used as foundational material for the experiments. The findings from this research are poised to have significant implications for the future of AI in education. By offering a comprehensive understanding of the variables that influence an LLM's performance, the study paves the way for the development of more refined and effective AI-driven educational tools. As educators and institutions grapple with the challenges of modern education, tools that can generate accurate, relevant, and diverse educational content can be invaluable. This thesis contributes to the academic understanding of LLMs and provides practical insights that can shape the future of AI-enhanced education; as education continues to evolve, the findings underscore the need for ongoing exploration and refinement to fully leverage AI's benefits in the educational sector.
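To make the prompt-structuring and prompt-engineering techniques named above concrete (persona, template, few-shot examples and a chain-of-thought cue), the sketch below assembles one such prompt; the wording and the physics example are invented for illustration and are not the study's materials.

```python
# Hypothetical assembly of a prompt combining persona, template, a few-shot
# example and a chain-of-thought cue. Wording and content are illustrative
# assumptions, not the thesis's actual study material.
PERSONA = "You are a patient physics teacher creating practice questions for upper-secondary students."

TEMPLATE = (
    "Topic: {topic}\n"
    "Write one multiple-choice question with four options, mark the correct "
    "option, and explain the answer step by step."
)

FEW_SHOT = """\
Topic: Ohm's law
Q: A 12 V battery drives a current through a 6 ohm resistor. What is the current?
A) 0.5 A  B) 2 A  C) 6 A  D) 72 A
Correct: B
Reasoning: I = U / R = 12 V / 6 ohm = 2 A.
"""

def build_prompt(topic: str) -> str:
    chain_of_thought_cue = "Think through the solution step by step before stating the correct option."
    return "\n\n".join([PERSONA, FEW_SHOT, TEMPLATE.format(topic=topic), chain_of_thought_cue])

print(build_prompt("Newton's second law"))
```

Dropping or adding individual components of `build_prompt` mirrors the progressive application of techniques described above, so their individual and combined effects on the model's output can be compared.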
|
28 |
ChatGPT: Ett hjälpmedel eller ett fuskverktyg? En översiktsstudie om potentiella möjligheter och utmaningar med att integrera chattverktyg i undervisningen. / ChatGPT: A help or a cheat tool? An overview study of the potential opportunities and challenges of integrating chat tools into teaching. Plantinger, Hanna, January 2024 (links)
The aim of this study is to examine how teachers can utilize chatbots in education in order to continue using writing assignments as assessment tools. Through a scoping review, various strategies are presented by analyzing empirical material on the basis of a SWOT analysis. The study seeks to address the following research questions: How can chatbots be used to enhance teaching and learning in social studies? And what measures are emphasized to prevent potential challenges regarding the relationship between writing assignments and chatbots? The results section of the paper is structured as a categorical overview based on the didactic questions: what, how, and why? Based on the results, eight strategies are identified: Chatbots as co-creators, Student-active exercises, Teacher assistant, Formality tool, Note-taking, Individualized lesson planning, Critical thinking, and Reverse search. Overall, all strategies aimed to optimize both students' and teachers' work. From a student perspective, chatbots could serve as a support to individualize the learning process based on the student's own conditions. From a teacher perspective, chatbots could optimize teachers' work and reduce workload. The results indicate that teachers can view chatbots as an additional resource during class time, a brainstorming tool during the planning phase, and an aid through feedback and professional development during the evaluation phase. The results also highlight several potential challenges to consider. The conclusion of this study is that writing assignments can still serve important functions in schools, though in a somewhat different manner than they have typically been employed historically. Chatbots can serve as a tool to meet the guidelines issued by the Swedish National Agency for Education in the national curriculum for social studies at the high school level. Based on the internal factors presented, there needs to be a willingness to develop and change traditional working methods, and the perception of what writing assignments should generate needs to change. All the strategies presented can either be seen as support during the writing process itself or as a supplementary assessment method for writing assignments. Based on the external factors, it is evident that the whole school as an organization needs to be involved for a successful integration of chatbots into education.
|
29 |
From Bytecode to Safety: Decompiling Smart Contracts for Vulnerability Analysis. Darwish, Malek, January 2024 (links)
This thesis investigated the use of Large Language Models (LLMs) for vulnerability analysis of decompiled smart contracts. A controlled experiment was conducted in which an automated system was developed to decompile smart contracts using two decompilers, Dedaub and Heimdall-rs, and subsequently analyze them using three LLMs: OpenAI’s GPT-4 and GPT-3.5, as well as Meta’s CodeLlama. The study focuses on assessing the effectiveness of the LLMs at identifying a range of vulnerabilities. The evaluation method included the collection and comparative analysis of performance metrics such as precision, recall and F1-scores. Our results show that the LLM-decompiler pairing of Dedaub and GPT-4 exhibits impressive detection capabilities across a range of vulnerabilities, while failing to detect some vulnerabilities at which CodeLlama excelled. We demonstrate the potential of LLMs to improve smart contract security and set the stage for future research to further expand on this domain.
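The reported precision, recall and F1-scores follow from comparing the LLM's findings with a labelled ground truth per vulnerability class. A minimal sketch of that scoring step is below, with made-up labels in place of the thesis's dataset; the decompilation step with Dedaub or Heimdall-rs is assumed to have happened upstream and is not shown.

```python
# Minimal sketch of scoring an LLM's vulnerability findings against ground
# truth with precision, recall and F1. The labels below are placeholders,
# not the thesis's data.
from sklearn.metrics import precision_recall_fscore_support

# 1 = vulnerability of a given class present / reported, 0 = absent
ground_truth = [1, 1, 0, 1, 0, 0, 1, 0]   # e.g. reentrancy, per contract
llm_reported = [1, 0, 0, 1, 1, 0, 1, 0]   # findings parsed from GPT-4 output

precision, recall, f1, _ = precision_recall_fscore_support(
    ground_truth, llm_reported, average="binary"
)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```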
|
30 |
Capturing Style Through Large Language Models - An Authorship Perspective. Anuj Dubey (18398505), 10 December 2024 (links)
<p dir="ltr">This research investigates the use of Large Language Model (LLM) embeddings to capture the unique stylistic features of authors in Authorship Attribution (AA) tasks. Specifically, the focus of this research is on evaluating whether LLM-generated embeddings can effectively capture stylistic nuances that distinguish different authors, ultimately assessing their utility in tasks such as authorship attribution and clustering.The dataset comprises news articles from The Guardian authored by multiple writers, and embeddings were generated using OpenAI's text-embedding-ada-002 model. These embeddings were subsequently passed through a Siamese network with the objective of determining whether pairs of texts were authored by the same individual. The resulting model was used to generate style embeddings for unseen articles, which were then evaluated through classification and cluster analysis to assess their effectiveness in identifying individual authors across varying text samples. The classification task tested the model's accuracy in distinguishing authors, while the clustering analysis examined whether style embeddings primarily captured authorial identity or reflected domain-specific topics.</p><p dir="ltr">Our findings demonstrate that the proposed architecture achieves high accuracy for authors not previously encountered, outperforming traditional stylometric features and highlighting the effectiveness of LLM-based style embeddings. Additionally, our experiments reveal that authorship attribution accuracy decreases as the number of authors increases, yet improves with longer text lengths. </p><p dir="ltr"><br></p>
|