71

The Rhetoric Revision Log: A Second Study on a Feedback Tool for ESL Student Writing

Cole, Natalie Marie 01 December 2017
A common pattern in teacher feedback on ESL writing is to focus primarily on grammar, often sidelining content (Ferris, 2003). This research is a second study of the Rhetoric Revision Log (RRL), a tool originally studied by Yi (2010) that helps teachers and students track content errors in writing. It further examines the effectiveness of the RRL, with minor changes made based on the earlier results. Data consist of RRLs completed by 42 students in three ESL writing classes at the same level, taught by four different teachers. All students' pretests, posttests, survey responses regarding use of the log, interview responses regarding the log, and data on needed content-based revisions were analyzed. Teachers' interview responses were examined as well, to draw conclusions about the efficacy of the log. Results show that use of the RRL helped students reduce content errors in their writing. Findings from student surveys and interviews indicate that a majority of students found the RRL beneficial, and teacher interviews provided positive feedback about implementing the log in ESL writing classes.
72

Root Cause Analysis and Classification for Firewall Log Events Using NLP Methods

Wang, Tongxin January 2022
Network log records are robust evidence for enterprises to make error diagnoses. The current troubleshooting method of Ericsson's Networks team is mainly manual observation. However, as the system grows vast and complex, the volume of log messages keeps increasing, so it is vital to discern the root cause of error logs accurately and quickly. This thesis proposes models that apply Natural Language Processing (NLP) methods to two main problems: moving manual root cause classification of logs to automated classification, and building a Question Answering (QA) system that gives the root cause directly. The models are validated on Ericsson's firewall traffic data. Different feature extraction methods and classification models are compared: the more effective Term Frequency-Inverse Document Frequency (TF-IDF) method combined with a Random Forest classifier obtains an F1 score of 0.87, and a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) classifier obtains an F1 score of 0.90. The validated QA model also performs well in quality assessment. The final results demonstrate that the proposed models can optimize manual analysis. When choosing algorithms, deep learning models such as BERT can produce similar or even better results than Random Forest and Naive Bayes classifiers; however, BERT is more complex to implement, requiring more resources and more caution than simpler solutions.
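As a concrete illustration of the kind of pipeline this abstract describes, here is a minimal sketch of a TF-IDF-plus-Random-Forest log classifier, assuming scikit-learn; the firewall messages, root-cause labels, and resulting score are invented placeholders, not Ericsson's data or the thesis code.

```python
# Hedged sketch: TF-IDF features + Random Forest, as the abstract describes.
# All log messages and labels below are invented placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

logs = [
    "deny tcp src 10.0.0.5 dst 10.0.0.9 port 443",
    "drop udp packet with malformed header",
    "deny tcp connection timeout on retransmit",
    "drop icmp rate limit exceeded",
] * 50
labels = ["policy", "malformed", "timeout", "rate_limit"] * 50

X_train, X_test, y_train, y_test = train_test_split(
    logs, labels, test_size=0.2, random_state=0)

# Vectorize the raw messages and classify in one pipeline.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
model.fit(X_train, y_train)
print("weighted F1:", f1_score(y_test, model.predict(X_test), average="weighted"))
```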
73

Optimizing log truck payload through improved weight control

Overboe, Paul David 24 July 2012
Trucking of forest products is a very important segment of the harvesting process, and it is monitored relatively closely by external parties. Load weight is the focal point of the attention received by log hauling. The optimization of load weights is therefore very important to a logging operation's success, and it can be achieved only through adequate gross vehicle weight control. Methods of load weight control are reviewed and possible applications discussed in this report. Studies were conducted to evaluate the adequacy of load weight control achieved using two quite different methods. A reporting technique that provided loader operators with information about trends in the delivery weights of the trucks they loaded was used to heighten their awareness of problem areas in load weight distributions. This study was conducted at two southern paper mills with substantially different truck weight regulation environments. Two separate case studies were conducted on Virginia loggers using on-board electronic truck scales. Results of the loading study indicated that the passive treatment affected the behavior of some of the producers studied. The behavioral changes observed generally improved the economic optimization of load delivery weights. The on-board scale studies indicated that the scale systems performed well in the applications observed. However, the economic benefits associated with use of the scales were negligible for the two producers studied, due to a reduction in delivery weights after installation of the scales. / Master of Science
74

A Computer Simulation Model for Predicting the Impacts of Log Truck Turn-Time on Timber Harvesting System Productivity

Barrett, Scott M. 09 February 2001
A computer simulation model was developed to represent a logging contractor's harvesting and trucking system for wood delivery from the contractor's in-woods landing to the receiving mill. The Log Trucking System Simulation (LTSS) model focuses on the impacts to logging contractors as changes in truck turn times cause an imbalance between the harvesting and trucking systems. The model was designed to serve as a practical tool that can illustrate the magnitude of cost and productivity changes as the delivery capacity of the contractor's trucking system changes. The model was used to perform incremental analyses, using an example contractor's costs and production rates, to illustrate the nature of the impacts associated with changes in the contractor's trucking system. These analyses indicated that the primary impact of increased turn times occurs when increased delivery time decreases the number of loads per day the contractor's trucking system can deliver. When increased delivery times cause the trucking system to limit harvesting production, total cost per delivered ton increases. In cases where trucking significantly limits system production, total cost per delivered ton would decrease if additional trucks were added. The model allows the user to simulate a harvest with up to eight products trucked to different receiving mills. The LTSS model can be used without extensive data input requirements and serves as a user-friendly tool for predicting cost and productivity changes in a logging contractor's harvesting and trucking system based on changes in truck delivery times. / Master of Science
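The central mechanism the abstract identifies, longer turn times capping the number of loads delivered per day and thereby driving up cost per delivered ton, can be shown with back-of-the-envelope arithmetic. The sketch below is not the LTSS model; every rate and cost in it is a hypothetical assumption.

```python
# Illustrative arithmetic only, not the LTSS model; all numbers are invented.
def loads_per_day(work_hours, turn_time_hours, trucks):
    # Each truck completes floor(work_hours / turn_time) round trips per day.
    return trucks * int(work_hours // turn_time_hours)

def cost_per_ton(fixed_daily_cost, tons_per_load, loads):
    return fixed_daily_cost / (tons_per_load * loads)

HARVEST_CAPACITY = 14  # loads/day the crew can cut and load (assumed)
for turn_time in (2.0, 2.5, 3.0):  # hours per round trip
    delivered = min(HARVEST_CAPACITY,
                    loads_per_day(work_hours=10, turn_time_hours=turn_time, trucks=3))
    print(f"turn {turn_time} h -> {delivered} loads/day, "
          f"${cost_per_ton(4200, 25, delivered):.2f}/ton")
```

Once trucking rather than harvest capacity becomes the binding constraint, each further increase in turn time raises the cost per delivered ton, matching the incremental analyses described above.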
75

A Log-Linear Analysis of a Set of Medical Data

Ko, Barbara Mook-Pik 02 1900
Methotrexate had been suspected of being harmful to the liver in psoriatic patients. Data from a prospective study designed to find out whether the drug affected the acquisition and worsening of various liver pathologies were analysed. Personal characteristics that could have an adverse effect on the liver were also investigated. Log-linear models were fitted to this set of categorical data in the form of multidimensional contingency tables. It was found that methotrexate would be hepatotoxic if the drug was taken over a prolonged period and/or if the cumulative dose taken was large. Otherwise, methotrexate could be administered to psoriatic patients without causing much harm to the liver. / Thesis / Master of Science (MSc)
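For readers unfamiliar with the technique: a log-linear model treats the cell counts of a multidimensional contingency table as Poisson-distributed and models their logarithms as linear in the factors and their interactions. Below is a minimal sketch of such a fit, assuming statsmodels; the factor names and counts are invented, not the study's data.

```python
# Hypothetical 2x2x2 contingency table; factor names and counts are invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "dose":      ["low"] * 4 + ["high"] * 4,
    "duration":  ["short", "short", "long", "long"] * 2,
    "pathology": ["no", "yes"] * 4,
    "count":     [40, 5, 30, 12, 35, 9, 18, 25],
})

# A log-linear model is a Poisson GLM on the cell counts; here we fit all
# main effects and two-way interactions, leaving the three-way term out so
# the model is not saturated and its fit can be assessed.
model = smf.glm("count ~ (dose + duration + pathology) ** 2",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())
```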
76

Bayesian Analysis of Log-Binomial Models

Zhou, Rong 13 July 2005
No description available.
77

Automatic interpretation of computed tomography (CT) images for hardwood log defect detection

Li, Pei 18 November 2008
This thesis describes the design of an image interpretation system for the automatic detection of internal hardwood log defects. The goal of the research is a system that can not only identify and locate internal defects of hardwood logs using computed tomography (CT) imagery, but also accommodate more than one type of wood and show potential for real-time industrial implementation. The thesis describes a new image classification system that uses a feed-forward artificial neural network as the image classifier. The classifier was trained with back-propagation, using training samples collected from two different types of hardwood logs, red oak and water oak. Pre-processing and post-processing are performed to increase the system's classification performance and to allow it to accommodate more than one wood type. It is shown that such a neural-net based approach can yield high classification accuracy, and it shows high potential for parallelism. Several possible design alternatives and comparisons are also addressed. The final image interpretation system has been successfully tested, exhibiting a classification accuracy of 95% on test images from four hardwood logs. / Master of Science
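As a rough illustration of the named architecture, a feed-forward network trained with back-propagation, here is a minimal numpy sketch; the input features and defect labels are random placeholders standing in for CT image data, and the layer sizes are arbitrary assumptions.

```python
# Minimal feed-forward net trained with back-propagation (numpy only).
# Features and labels are random stand-ins for CT image data.
import numpy as np

rng = np.random.default_rng(0)
n, d, h, k = 200, 64, 32, 3            # samples, inputs, hidden units, classes
X = rng.normal(size=(n, d))            # placeholder "pixel" features
y = rng.integers(0, k, size=n)         # placeholder defect labels
Y = np.eye(k)[y]                       # one-hot targets

W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, k)); b2 = np.zeros(k)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for _ in range(300):
    H = np.tanh(X @ W1 + b1)           # forward pass, one hidden layer
    P = softmax(H @ W2 + b2)
    dZ2 = (P - Y) / n                  # gradient of softmax + cross-entropy
    dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
    dH = (dZ2 @ W2.T) * (1 - H ** 2)   # back-propagate through tanh
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", (P.argmax(axis=1) == y).mean())
```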
78

Frequent Inventory of Network Devices for Incident Response: A Data-driven Approach to Cybersecurity and Network Operations

Kobezak, Philip D. 22 May 2018
Challenges exist in higher education networks with host inventory and identification. Any student, staff member, faculty member, or dedicated IT administrator can be the person primarily responsible for a device on the network. Confounding the problem is the large mix of personally-owned devices. These network environments are a hybrid of corporate enterprise, federated network, and Internet service provider. This management model has survived for decades based on the ability to identify responsible personnel when a host, system, or user account is suspected to have been compromised or is disrupting network availability for others. Mobile devices, roaming wireless access, and users accessing services from multiple devices have made the task of identification onerous. With increasing numbers of hosts on the networks of higher education institutions, strategies such as dynamic addressing and address translation become necessary. The proliferation of the Internet of Things (IoT) makes this identification task even more difficult. Loss of intellectual property, extortion, theft, and reputational damage are all significant risks to research institution networks. Quickly responding to and remediating incidents reduces exposure and risk. This research evaluates what universities are doing for host inventory and creates a working prototype of a system for associating relevant log events to one or more responsible people. The prototype reduces the need for human-driven updates while enriching the dynamic host inventory with additional information. It also shows the value of associating application and service authentications to hosts. The prototype uses live network data, which is de-identified to protect privacy. / Master of Science
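The enrichment step described here, tying authentications to dynamically addressed hosts, is essentially a join of event streams on network address. A toy sketch under that assumption follows; the field names and records are invented, not the prototype's schema.

```python
# Toy join of DHCP lease events and authentication events by IP address;
# all field names and records are invented examples.
from collections import defaultdict

dhcp_leases = [           # (ip, mac, timestamp)
    ("10.1.2.3", "aa:bb:cc:dd:ee:ff", 1000),
    ("10.1.2.4", "11:22:33:44:55:66", 1005),
]
auth_events = [           # (ip, username, timestamp)
    ("10.1.2.3", "jdoe", 1010),
]

inventory = defaultdict(lambda: {"macs": set(), "users": set()})
for ip, mac, _ in dhcp_leases:
    inventory[ip]["macs"].add(mac)
for ip, user, _ in auth_events:
    inventory[ip]["users"].add(user)  # ties a responsible person to the host

print(dict(inventory))
```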
79

Measuring and evaluating log truck performance in a variety of operating conditions

McCormack, Robert James 19 October 2005
Studies of log truck speeds and fuel consumption were made at four locations in the southeastern United States. Execution of the study necessitated the development and testing of a microprocessor-based data logger capable of withstanding the harsh operating environment found in forest harvesting and transport equipment. The first study investigated the normal operating pattern for a truck in a logging contractor's fleet. The truck was found to be highly utilized and to incur considerable distances of unloaded running to service the contractor's widely separated operations. A second study highlighted the fuel and speed penalties associated with operations on sand and gravel roads. The third study documented significant performance differences between routes delivering to one location, even where road surface differences were minimal. A fourth, detailed study illustrated speed and fuel consumption differences between urban and rural operations. Tests on a group of five experienced drivers demonstrated considerable differences in speed and fuel usage. Some drivers appeared to have a driving style which delivered higher speed with low fuel consumption. A detailed analysis of individual speed profiles indicated that as much as 1/3 to 1/2 of the recorded fuel consumption on one section was associated with air resistance. In conclusion, the studies noted that for the trucks and conditions evaluated: (1) There are significant performance losses and increased costs associated with operations on low-standard road sections; road roughness was a significant factor determining speed. (2) Performance and cost differences between routes were demonstrated even for roads of comparable surface type, indicating that inter-route cost differences may be pervasive. These differences would require acknowledgement and evaluation if equitable route payment schedules were to be constructed. (3) All the trucks studied operated at least part of the time at high speeds and may be incurring unnecessary fuel and maintenance expenses; application of aerodynamic deflectors might be beneficial, and their applicability should be tested. (4) Some driving styles appear more efficient and deserve further investigation and documentation. Changing driver behavior might present the most cost-effective means of improving fleet performance. / Ph. D.
80

A novel classification method applied to well log data calibrated by ontology based core descriptions

Graciolli, Vinicius Medeiros January 2018
A method for the automatic detection of lithological types and layer contacts was developed through the combined statistical analysis of a suite of conventional wireline logs, calibrated by the systematic description of cores. The intent of this project is to allow the integration of rock data into reservoir models. The cores are described with the support of an ontology-based nomenclature system that extensively formalizes a large set of rock attributes, including lithology, texture, primary and diagenetic composition, and depositional, diagenetic, and deformational structures. The descriptions are stored in a relational database along with the records of conventional wireline logs (gamma ray, resistivity, density, neutron, sonic) for each analyzed well.
This structure allows defining prototypes of combined log values for each recognized lithology, by calculating the mean and variance-covariance of the values measured by each log tool for each of the lithologies described in the cores. The statistical algorithm is able to learn from each described and logged core interval added to the database, progressively refining the automatic lithological identification. The detection of lithological contacts is performed by smoothing each of the logs with two moving means of different window sizes. The results of each pair of smoothed logs are compared, and the places where the lines cross define the depths where there are abrupt shifts in the log values, potentially indicating a change of lithology. The results of applying this method to each log are then unified into a single assessment of lithological boundaries. The mean and variance-covariance data derived from the core samples are then used to build an n-dimensional Gaussian distribution for each of the recognized lithologies. At this point, Bayesian priors are also calculated for each lithology. These distributions are checked against each of the previously detected lithological intervals by means of a probability density function, evaluating how close the interval is to each lithology prototype and allowing the assignment of a lithological type to each interval. The developed method was tested on a set of wells in the Sergipe-Alagoas basin, and the prediction accuracy achieved during testing is superior to classic pattern recognition methods such as neural networks and KNN classifiers. The method was then combined with neural networks and KNN classifiers into a multi-agent system. The results show significant potential for effective operational application to the construction of geological models for the exploration and development of areas with large volumes of conventional wireline log data and representative cored intervals.
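Two of the steps described above lend themselves to a compact illustration: detecting candidate contacts where a short and a long moving mean of a log curve cross, and assigning a lithology by Gaussian density against per-lithology prototypes. The sketch below assumes numpy and scipy; the synthetic gamma-ray curve, tool pair, and lithology prototypes are all hypothetical, not core-calibrated values.

```python
# Hedged sketch of contact detection and Gaussian lithology assignment;
# the log curve and prototypes are synthetic, not basin data.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
gamma = np.concatenate([rng.normal(40, 3, 120),   # synthetic gamma-ray log:
                        rng.normal(90, 3, 120)])  # one contact at sample 120

def moving_mean(x, w):
    # Zero-padded edges can produce spurious crossings near the boundaries.
    return np.convolve(x, np.ones(w) / w, mode="same")

short, long_ = moving_mean(gamma, 5), moving_mean(gamma, 25)
# Where the two smoothed curves cross, the log value shifts abruptly.
contacts = np.nonzero(np.diff(np.sign(short - long_)))[0]
print("candidate contacts at samples:", contacts)

# Per-lithology Gaussian prototypes over two tools (gamma ray, density);
# the means and covariances here are invented placeholders.
prototypes = {
    "sandstone": multivariate_normal([40.0, 2.35], [[9.0, 0.1], [0.1, 0.01]]),
    "shale":     multivariate_normal([90.0, 2.55], [[9.0, 0.1], [0.1, 0.01]]),
}
interval_mean = np.array([42.0, 2.36])  # mean log values over one interval
best = max(prototypes, key=lambda lith: prototypes[lith].logpdf(interval_mean))
print("assigned lithology:", best)
```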
