About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

none

Lin, Ming-Tung 21 July 2003 (has links)
none
2

Frequent Inventory of Network Devices for Incident Response: A Data-driven Approach to Cybersecurity and Network Operations

Kobezak, Philip D. 22 May 2018 (has links)
Challenges exist in higher education networks with host inventory and identification. Any student, staff member, faculty member, or dedicated IT administrator can be the person primarily responsible for a device on the network. Confounding the problem is a large mix of personally-owned devices. These network environments are a hybrid of corporate enterprise, federated network, and Internet service provider. This management model has survived for decades based on the ability to identify responsible personnel when a host, system, or user account is suspected to have been compromised or is disrupting network availability for others. Mobile devices, roaming wireless access, and users accessing services from multiple devices have made the task of identification onerous. With increasing numbers of hosts on the networks of higher education institutions, strategies such as dynamic addressing and address translation become necessary. The proliferation of the Internet of Things (IoT) makes this identification task even more difficult. Loss of intellectual property, extortion, theft, and reputational damage are all significant risks to research institution networks. Quickly responding to and remediating incidents reduces exposure and risk. This research evaluates what universities are doing for host inventory and creates a working prototype of a system for associating relevant log events with one or more responsible people. The prototype reduces the need for human-driven updates while enriching the dynamic host inventory with additional information. It also shows the value of associating application and service authentications with hosts. The prototype uses live network data, which is de-identified to protect privacy. / Master of Science / Keeping track of computers, or hosts, on a network has become increasingly difficult. In the past, most hosts were owned by the institution, but now more are owned by end users. The management of institutional networks has become a mix of corporate enterprise, federated network, and Internet service provider. This model has survived for decades based on the ability to identify someone responsible when a host or system is suspected to be infected with malware or is disrupting network availability for others. Mobile devices, roaming wireless access, and users accessing services from multiple devices have made the task of identification more difficult. With increasing numbers of hosts on the networks of higher education institutions, strategies such as dynamic addressing and address translation become necessary. The proliferation of the Internet of Things (IoT) makes identification even more difficult. Loss of intellectual property, theft, and reputational damage are all significant risks to institutional networks. Quickly responding to and remediating cybersecurity incidents reduces exposure and risk. This research considers what universities are doing for host inventory and creates a working prototype of a system for associating relevant log events with one or more responsible people. The prototype reduces the need for human-driven updates while incorporating additional information into the dynamic host inventory. It also shows the value of associating application and service authentications with hosts. The prototype uses real network data, which is de-identified to protect privacy.
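The core association such a prototype performs can be illustrated with a short sketch: joining DHCP lease events to service authentication events so that each host is attributed to the people who authenticated from it. This is a minimal illustration under assumed, hypothetical log formats and field names, not the thesis prototype itself.

```python
# Minimal sketch: enrich a dynamic host inventory by joining DHCP lease
# events with service authentication events. Log formats are hypothetical.
from collections import defaultdict

dhcp_leases = [
    # (timestamp, mac, ip) -- assumed lease-log fields
    (1526950000, "aa:bb:cc:dd:ee:ff", "10.0.4.17"),
]
auth_events = [
    # (timestamp, username, source_ip) -- assumed auth-log fields
    (1526950120, "jdoe", "10.0.4.17"),
]

def build_inventory(leases, auths, window=3600):
    """Attribute each authenticated user to the host (MAC) that held the
    source IP at authentication time, within a lease window (seconds)."""
    inventory = defaultdict(set)  # mac -> set of usernames
    for ts, user, ip in auths:
        for lease_ts, mac, lease_ip in leases:
            if lease_ip == ip and 0 <= ts - lease_ts <= window:
                inventory[mac].add(user)
    return inventory

print(dict(build_inventory(dhcp_leases, auth_events)))
# {'aa:bb:cc:dd:ee:ff': {'jdoe'}}
```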
3

Recognition of Infrastructure Events Using Principal Component Analysis

Broadbent, Lane David 01 December 2016 (has links)
Information Technology systems generate log messages to allow for system monitoring. In increasingly large and complex systems, the volume of log data can overwhelm the analysts tasked with monitoring these systems. A system was developed that utilizes Principal Component Analysis to assist the analyst in characterizing system health and events. Once trained, the system was able to accurately identify a state of heavy load on a device with a low false-positive rate. The system was also able to accurately identify an error condition when trained on a single event. The method employed can assist in the real-time monitoring of large, complex systems, increasing the efficiency of trained analysts.
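As an illustration of the technique (a minimal sketch, not the author's implementation), per-interval log event counts can be projected onto a principal subspace learned from normal operation, with a large reconstruction residual flagging a heavy-load or error condition:

```python
# PCA-based anomaly detection on per-interval log event counts (synthetic).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.poisson(lam=[20, 5, 3, 1], size=(200, 4))  # training windows
heavy_load = np.array([[90, 40, 25, 2]])                # suspect window

pca = PCA(n_components=2).fit(normal)

def residual(x):
    """Squared reconstruction error after projecting onto the PCA subspace."""
    x_hat = pca.inverse_transform(pca.transform(x))
    return np.sum((x - x_hat) ** 2, axis=1)

# Threshold at the 99th percentile of normal residuals -> low false positives.
threshold = np.percentile(residual(normal), 99)
print(residual(heavy_load) > threshold)  # [ True ] -> flag as anomalous
```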
4

VIRTUALIZED CLOUD PLATFORM MANAGEMENT USING A COMBINED NEURAL NETWORK AND WAVELET TRANSFORM STRATEGY

Liu, Chunyu 01 March 2018 (has links)
This study focuses on implementing a log analysis strategy that combines a neural network algorithm with the wavelet transform. The wavelet transform allows us to extract the important hidden information and features of the original time-series log data and offers a precise framework for the analysis of input information, while the neural network constitutes a powerful nonlinear function approximator that can provide detection and prediction capabilities. The combination of the two techniques rests on the idea of using the wavelet transform to denoise the log data by decomposing it into a set of coefficients, then feeding the denoised data into a neural network. The experimental results reveal that this strategy is better able to identify patterns among problems in a log dataset and makes predictions with better accuracy. It can help platform maintainers take corrective action to eliminate risks before serious damage occurs.
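A minimal sketch of this two-stage strategy, assuming PyWavelets for the transform and a small scikit-learn network (the abstract does not name specific libraries):

```python
# Stage 1: wavelet-denoise a log-derived time series.
# Stage 2: train a small neural network to predict the next value.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 512)
series = np.sin(t) + 0.3 * rng.standard_normal(512)  # noisy log metric

# Decompose, soft-threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec(series, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate
thresh = sigma * np.sqrt(2 * np.log(len(series)))         # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, "soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: len(series)]

# Sliding windows of the denoised series feed a small neural network.
w = 16
X = np.array([denoised[i : i + w] for i in range(len(denoised) - w)])
y = denoised[w:]
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("test MSE:", np.mean((model.predict(X[400:]) - y[400:]) ** 2))
```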
5

An Investigation of Regional Variations of Barnett Shale Reservoir Properties, and Resulting Variability of Hydrocarbon Composition and Well Performance

Tian, Yao 2010 May 1900 (has links)
In 2007, the Barnett Shale in the Fort Worth basin of Texas produced 1.1 trillion cubic feet (Tcf) of gas and ranked second in U.S. gas production. Despite its importance, controls on Barnett Shale gas well performance are poorly understood. Regional and vertical variations of reservoir properties and their effects on well performance have not been assessed. Therefore, we conducted a study of Barnett Shale stratigraphy, petrophysics, and production, and we integrated these results to clarify the controls on well performance. The Barnett Shale ranges from 50 to 1,100 ft thick; we divided the formation into four reservoir units that are significant to engineering decisions. All but Reservoir Unit 1 (the lower reservoir unit) are commonly perforated in gas wells. Reservoir Unit 1 appears to be clay-rich shale and ranges from 10 to 80 ft thick. Reservoir Unit 2 is a laminated, siliceous mudstone and marly carbonate zone, 20 to 300 ft thick. Reservoir Unit 3 is composed of multiple, stacked, thin (~15-30 ft thick), upward-coarsening sequences of brittle carbonate and siliceous units interbedded with ductile shales; thickness ranges from 0 to 500 ft. Reservoir Unit 4, the upper Barnett Shale, is composed dominantly of shale interbedded with upward-coarsening, laterally persistent, brittle/ductile sequences ranging from 0 to 100 ft thick. Gas production rates vary directly with Barnett Shale thermal maturity and structural setting. For the following five production regions, which encompass most of the producing wells, peak monthly gas production from horizontal wells decreases as follows: Tier 1 (median production 60 MMcf) to Core Area to Parker County to Tier 2 West to Oil Zone-Montague County (median production 10 MMcf). Peak monthly oil production from horizontal wells follows the inverse order; median peak monthly oil production is 3,000 bbl in the Oil Zone-Montague County and zero in Tier 1. Generally, horizontal wells produce approximately twice as much oil and gas as vertical wells. This research clarifies regional variations of reservoir and geologic properties of the Barnett Shale. Results of these studies should assist operators with optimization of development strategies and gas recovery from the Barnett Shale.
6

An "Interest" Index for WWW Servers and CyberRanking

YAMAMOTO, Shuichiro, MOTODA, Toshihiro, HATASHIMA, Takashi 20 April 2000 (has links)
No description available.
7

PETROPHYSICAL ANALYSIS OF WELLS IN THE ARIKAREE CREEK FIELD, COLORADO TO DEVELOP A PREDICTIVE MODEL FOR HIGH PRODUCTION

DePriest, Keegan 01 December 2019 (has links)
All of the oil and gas wells producing in the Arikaree Creek Field, Colorado, targeted the Spergen Formation along similar structures within a wrench fault system; however, the wells have vastly different production values. This thesis develops a predictive model for high production in the field while also accounting for a failed waterflood initiated in 2016. Petrophysical analysis of thirteen wells shows that high-producing wells share common characteristics of pay zone location, lithology, porosity, and permeability, and that the Spergen Formation is not homogeneous. Highly productive wells have pay zones in the lower part of the formation, in sections that are dolomitized and have anomalously high water saturation. This is likely related to the paragenesis of the formation, in which dolomitization of the lower sections increased porosity and permeability but left the pay zones with high water saturation values. This heterogeneity likely accounts for the failed waterflood. Results show that the important petrophysical components for highly productive wells are the location of the pay zone within the reservoir, porosity, permeability, and water saturation. Additionally, reservoir homogeneity, which was not present here, is crucial for successful waterflooding.
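For readers unfamiliar with the quantities involved, water saturation is conventionally estimated from well logs with Archie's equation; the snippet below is a generic textbook illustration with assumed parameter values, not the thesis's method or field data:

```python
# Archie's equation: Sw = ((a / phi**m) * (Rw / Rt)) ** (1 / n)
# All parameter values below are generic textbook assumptions.
def archie_sw(phi, rt, rw=0.05, a=1.0, m=2.0, n=2.0):
    """Water saturation (fraction) from porosity phi (fraction), true
    resistivity rt (ohm-m), and formation water resistivity rw (ohm-m)."""
    return ((a / phi**m) * (rw / rt)) ** (1.0 / n)

# A more porous (e.g. dolomitized) interval can still show high Sw if Rt is low.
print(round(archie_sw(phi=0.12, rt=8.0), 2))   # ~0.66
print(round(archie_sw(phi=0.06, rt=40.0), 2))  # ~0.59
```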
8

Automating Log Analysis

Kommineni, Sri Sai Manoj, Dindi, Akhila January 2021 (has links)
Background: With the advent of the information age, a large number of services have emerged that run on clusters of computers. Maintaining such large, complex systems is a difficult task. Developers rely on one artifact common to almost all software systems: console logs. To troubleshoot problems, developers refer to these logs. Identifying anomalies in the logs can reveal the cause of a problem, which motivates automating log analysis. This study focuses on anomaly detection in logs. Objectives: The main goal of the thesis is to identify different algorithms for anomaly detection in logs, implement the algorithms, and compare them in an experiment. Methods: A literature review was conducted to identify the most suitable algorithms for anomaly detection in logs. An experiment was then conducted to compare the algorithms identified in the literature review. The experiment was performed on a dataset of logs generated by Hadoop Distributed File System (HDFS) servers, consisting of more than 11 million lines of logs. The algorithms compared are K-means, DBSCAN, Isolation Forest, and Local Outlier Factor, all of which are unsupervised learning algorithms. Results: The performance of these algorithms was compared using precision, recall, accuracy, F1 score, and run time. Though DBSCAN was the fastest overall, it resulted in poor recall, as did Isolation Forest. Local Outlier Factor was the fastest at prediction. K-means had the highest precision, and Local Outlier Factor had the highest recall, accuracy, and F1 score. Conclusion: After comparing the metrics of the different algorithms, we conclude that Local Outlier Factor performed better than the other algorithms with respect to most of the metrics measured.
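A minimal sketch of such a comparison, with synthetic feature vectors standing in for parsed HDFS sessions (the feature extraction and parameters here are assumptions, not the thesis setup):

```python
# Compare four unsupervised detectors on log-derived feature vectors.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (500, 8)),   # normal sessions
               rng.normal(5, 1, (10, 8))])   # anomalous sessions
y_true = np.array([0] * 500 + [1] * 10)

preds = {
    "IsolationForest": IsolationForest(random_state=0).fit_predict(X) == -1,
    "LOF": LocalOutlierFactor(n_neighbors=20).fit_predict(X) == -1,
    "DBSCAN": DBSCAN(eps=1.5, min_samples=5).fit_predict(X) == -1,
}
# K-means has no anomaly label; flag points far from their centroid.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
preds["KMeans"] = dist > np.percentile(dist, 98)

for name, pred in preds.items():
    recall = np.sum(pred & (y_true == 1)) / y_true.sum()
    print(f"{name}: recall={recall:.2f}")
```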
9

Root Cause Analysis and Classification for Firewall Log Events Using NLP Methods / Rotorsaksanalys och klassificering för brandväggslogghändelser med hjälp av NLP-metoder

Wang, Tongxin January 2022 (has links)
Network log records provide robust evidence for enterprises when diagnosing errors. Ericsson's Networks team currently troubleshoots mainly by manual observation. However, as the system grows vast and complex, the volume of log messages keeps increasing, so it is vital to discern the root cause of error logs accurately and quickly. This thesis applies Natural Language Processing (NLP) methods and proposes models that address two main problems: moving manual root-cause classification of logs to automated classification, and building a Question Answering (QA) system that gives the root cause directly. The models are validated on Ericsson's firewall traffic data. Different feature extraction methods and classification models are compared: the more effective Term Frequency-Inverse Document Frequency (TF-IDF) method combined with a Random Forest classifier obtains an F1 score of 0.87, and fine-tuned Bidirectional Encoder Representations from Transformers (BERT) classification obtains an F1 score of 0.90. The validated QA model also performs well in quality assessment. The final results demonstrate that the proposed models can optimize manual analysis. When choosing algorithms, deep learning models such as BERT can produce similar or even better results than Random Forest and Naive Bayes classifiers. However, BERT is complex to implement, since it requires more resources and more caution than more straightforward solutions.
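A minimal sketch of the TF-IDF plus Random Forest pipeline described above, using hypothetical firewall-style messages and root-cause labels:

```python
# TF-IDF features feeding a Random Forest root-cause classifier (toy data).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

logs = [
    "deny tcp src 10.1.2.3 dst 10.9.8.7 port 443 policy violation",
    "drop udp src 10.4.4.4 flood detected rate limit exceeded",
    "deny tcp src 10.1.2.9 dst 10.9.8.7 port 22 policy violation",
    "drop icmp src 10.4.4.8 flood detected rate limit exceeded",
]
root_causes = ["policy", "dos", "policy", "dos"]  # hypothetical labels

clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
clf.fit(logs, root_causes)
print(clf.predict(["deny tcp src 10.1.2.5 dst 10.9.8.7 port 80 policy violation"]))
# ['policy']
```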
