21

Analysis of Diameter Log Files with Elastic Stack / Analysering av Diameter log filer med hjälp av Elastic Stack

Olars, Sebastian January 2020 (has links)
There is a growing need for more efficient tools and services for log analysis, a need that comes from the ever-growing use of digital services and applications, each one generating thousands of lines of log event messages for the sake of auditing and troubleshooting. This thesis was initiated on behalf of one of the departments of the IT consulting company TietoEvry in Karlstad. The purpose of the project was to investigate whether the log analysis service Elastic Stack would be a suitable solution for TietoEvry's need for a more efficient method of log event analysis. As part of this investigation, a small-scale deployment of Elastic Stack was created and used as a proof of concept. The investigation showed that Elastic Stack would be a suitable tool for the monitoring and analysis needs of TietoEvry. The final version of the deployment was, however, not able to fulfill all of the requirements initially set out by TietoEvry; this was mainly due to a lack of time rather than limitations of Elastic Stack.
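Illustrative aside: a minimal sketch (not from the thesis) of how parsed log events could be shipped into a local Elasticsearch node with the official Python client; the index name, field layout, and cluster URL are assumptions.

```python
# Sketch: indexing Diameter-style log events into a local Elasticsearch
# node for later search and visualization in Kibana. All names here
# (index, fields, URL) are illustrative assumptions, not from the thesis.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local single node

def index_log_event(raw_line: str) -> None:
    """Parse one log line (naive whitespace split) and index it."""
    parts = raw_line.split(maxsplit=2)
    doc = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "level": parts[0] if parts else "UNKNOWN",
        "message": raw_line,
    }
    es.index(index="diameter-logs", document=doc)

index_log_event("ERROR session-manager Diameter CCR timeout for session 42")
```

In a full Elastic Stack deployment this parsing-and-shipping role is typically played by Logstash or Filebeat rather than a custom script.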
22

Log Frequency Analysis for Anomaly Detection in Cloud Environments

Bendapudi, Prathyusha January 2024 (has links)
Background: Log analysis has been proven to be highly beneficial in monitoring system behaviour, detecting errors and anomalies, and predicting future trends in systems and applications. However, with the continuous evolution of these systems and applications, the amount of log data generated is increasing rapidly, and so is the manual effort invested in log analysis for error detection and root cause analysis. While there is continuous research into reducing this manual effort, this thesis adds to automated log analysis a new approach based on the temporal patterns of logs in a particular system environment, which can help reduce manual effort to a great extent. Objectives: The main objective of this research is to identify temporal patterns in logs using clustering algorithms, extract the outlier logs which do not adhere to any time pattern, and further analyse them to check whether these outlier logs are helpful in detecting errors and identifying their root cause. Methods: Design Science Research was employed to fulfil the objectives of the thesis, as the work required the generation of intermediary results and an iterative and responsive approach. The initial part of the thesis consisted of building an artifact which aided in identifying temporal patterns in the logs of different log types using the DBSCAN clustering algorithm. After identification of patterns and extraction of outlier logs, interviews were conducted in which system experts manually analysed the outlier logs, provided insights on them, and validated the log frequency analysis. Results: The results obtained after running the clustering algorithm on logs of different log types show clusters which represent temporal patterns in most of the files. Some log files have no time patterns, which indicates that not all log types adhere to a fixed time pattern. The interviews conducted with system experts on the outlier logs yielded promising results, indicating that log frequency analysis is indeed helpful in reducing the manual effort involved in log analysis for error detection and root cause analysis. Conclusions: The results show that most of the logs in the given cloud environment adhere to time frequency patterns, and that analysing these patterns and their outliers leads to easier error detection and root cause analysis in the given cloud environment.
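Illustrative aside: a minimal sketch of the kind of temporal clustering described above, using scikit-learn's DBSCAN over per-event timestamps; the eps and min_samples values are illustrative assumptions, and DBSCAN's noise label (-1) marks the outlier logs.

```python
# Sketch: cluster log timestamps with DBSCAN and treat noise points
# (label -1) as outlier logs that follow no temporal pattern.
# eps/min_samples are illustrative; real values need tuning per log type.
import numpy as np
from sklearn.cluster import DBSCAN

# Assumed input: one timestamp (seconds since midnight) per log line.
timestamps = np.array([3600, 3605, 3610, 7200, 7204, 7209, 55000]).reshape(-1, 1)

labels = DBSCAN(eps=30.0, min_samples=3).fit_predict(timestamps)

outliers = timestamps[labels == -1].ravel()
print("cluster labels:", labels)        # [0 0 0 1 1 1 -1]
print("outlier timestamps:", outliers)  # [55000]
```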
23

Analysis and Modeling of World Wide Web Traffic

Abdulla, Ghaleb 30 April 1998 (has links)
This dissertation deals with monitoring, collecting, analyzing, and modeling of World Wide Web (WWW) traffic and client interactions. The rapid growth of WWW usage has not been accompanied by an overall understanding of models of information resources and their deployment strategies. Consequently, the current Web architecture often faces performance and reliability problems. Scalability, latency, bandwidth, and disconnected operations are some of the important issues that should be considered when attempting to adjust for the growth in Web usage. The WWW Consortium launched an effort to design a new protocol that will be able to support future demands. Before doing that, however, we need to characterize current users' interactions with the WWW and understand how it is being used. We focus on proxies since they provide a good medium for caching, filtering information, payment methods, and copyright management. We collected proxy data from our environment over a period of more than two years. We also collected data from other sources such as schools, information service providers, and commercial sites. Sampling times range from days to years. We analyzed the collected data looking for important characteristics that can help in designing a better HTTP protocol. We developed a modeling approach that considers Web traffic characteristics such as self-similarity and long-range dependency, an algorithm to characterize users' sessions, and a high-level Web traffic model suitable for sensitivity analysis. As a result of this work we developed statistical models of parameters such as arrival times, file sizes, file types, and locality of reference. We describe an approach to model long-range-dependent Web traffic and we characterize activities of users accessing a digital library courseware server or Web search tools. Temporal and spatial locality of reference within examined user communities is high, so caching can be an effective tool to help reduce network traffic and to help solve the scalability problem. We recommend utilizing our findings to promote a smart distribution or push model to cache documents when there is a likelihood of repeat accesses. / Ph. D.
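Illustrative aside: self-similarity of the kind modeled above is often checked by estimating the Hurst exponent of a request-count series; the rescaled-range (R/S) sketch below is a generic textbook method, not the dissertation's code. H near 0.5 indicates no long-range dependence, while H clearly above 0.5 suggests self-similar traffic.

```python
# Sketch: rescaled-range (R/S) estimate of the Hurst exponent for a
# per-second request-count series. The toy Poisson data is memoryless,
# so the estimate should come out near 0.5.
import numpy as np

def hurst_rs(series: np.ndarray, window_sizes=(16, 32, 64, 128, 256)) -> float:
    rs_values = []
    for n in window_sizes:
        rs_per_block = []
        for start in range(0, len(series) - n + 1, n):
            block = series[start:start + n]
            dev = np.cumsum(block - block.mean())
            r = dev.max() - dev.min()  # range of cumulative deviations
            s = block.std()
            if s > 0:
                rs_per_block.append(r / s)
        rs_values.append(np.mean(rs_per_block))
    # Slope of log(R/S) against log(n) approximates H.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(0)
counts = rng.poisson(lam=100, size=4096)
print(f"H estimate: {hurst_rs(counts):.2f}")  # expect roughly 0.5 here
```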
24

Assessing Anonymized System Logs Usefulness for Behavioral Analysis in RNN Models

Vargis, Tom Richard, Ghiasvand, Siavash 06 August 2024 (has links)
System logs are a common source of monitoring data for analyzing the behaviour of computing systems. Due to the complexity of modern computing systems and the large size of collected monitoring data, automated analysis mechanisms are required. Numerous machine learning and deep learning methods have been proposed to address this challenge. However, due to the existence of sensitive data in system logs, their analysis and storage raise serious privacy concerns. Anonymization methods can be used to cleanse the monitoring data before analysis. However, anonymized system logs in general do not provide adequate usefulness for the majority of behavioural analyses. Content-aware anonymization mechanisms such as PαRS preserve the correlation of system logs even after anonymization. This work evaluates the usefulness of system logs from the Taurus HPC cluster, anonymized using PαRS, for behavioural analysis via recurrent neural network models. To facilitate the reproducibility and further development of this work, the implemented prototype and monitoring data are publicly available [12].
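Illustrative aside: the paper's actual architecture is not reproduced here, but a next-event-prediction LSTM over anonymized log-event IDs, of the general kind used for such behavioural analysis, might be sketched in PyTorch as follows; the vocabulary size, dimensions, and data are placeholder assumptions.

```python
# Sketch: next-event prediction over anonymized log-event IDs with an
# LSTM. Vocabulary size, dimensions, and training data are illustrative.
import torch
import torch.nn as nn

class NextEventLSTM(nn.Module):
    def __init__(self, vocab_size: int = 50, embed_dim: int = 16, hidden: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):                # x: (batch, seq_len) of event IDs
        out, _ = self.lstm(self.embed(x))
        return self.head(out[:, -1, :])  # logits for the next event

model = NextEventLSTM()
window = torch.randint(0, 50, (8, 10))   # 8 windows of 10 event IDs each
target = torch.randint(0, 50, (8,))      # the event that actually followed
loss = nn.functional.cross_entropy(model(window), target)
loss.backward()                          # one illustrative training step
print(f"loss: {loss.item():.3f}")
```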
25

Machine Learning-Assisted Log Analysis for Uncovering Anomalies

Rurling, Samuel January 2024 (has links)
Logs, which are semi-structured records of system runtime information, contain a lot of valuable insights. By looking at the logs, developers and operators can analyse their system's behavior, which is especially necessary when something in the system goes wrong, as nonconforming logs may indicate a root cause. With the growing complexity and size of IT systems, however, millions of logs are generated hourly, and reviewing them manually can become an all-consuming task. A potential aid in log analysis is machine learning: by leveraging their ability to automatically learn from experience, machine learning algorithms can be trained to analyse logs automatically. In this thesis, machine learning is used to perform anomaly detection, the discovery of so-called nonconforming logs. An experiment is created in which four feature extraction methods (four ways of creating data representations from the logs) are tested in combination with three machine learning models: LogCluster, PCA and SVM. Additionally, a neural network architecture called an LSTM network, which can craft its own features and analyse them, is explored as well. The results show that the LSTM performed the best in terms of precision, recall and F1-score, followed by SVM, LogCluster and PCA in combination with a feature extraction method using word embeddings.
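Illustrative aside: a minimal sketch of one feature-extraction-plus-model pairing of the kind compared above (not the thesis's exact setup), with TF-IDF features feeding a scikit-learn one-class SVM that flags nonconforming logs; the sample lines and parameters are illustrative assumptions.

```python
# Sketch: TF-IDF feature extraction over raw log lines plus a
# One-Class SVM trained only on "normal" logs; predictions of -1
# flag anomalies. Sample data and nu value are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

normal_logs = [
    "connection opened from host a",
    "connection closed for host a",
    "heartbeat ok from host b",
    "connection opened from host b",
]
new_logs = [
    "heartbeat ok from host a",
    "kernel panic unable to mount root fs",
]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(normal_logs)

detector = OneClassSVM(nu=0.1, kernel="rbf").fit(X_train)
print(detector.predict(vectorizer.transform(new_logs)))  # 1 = normal, -1 = anomaly
```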
26

[en] MODERNIZATION OF LEGACY SYSTEMS: AN APPROACH BASED ON LOG ANALYSIS, QUESTIONNAIRES AND INTERVIEWS / [pt] MODERNIZAÇÃO DE SISTEMAS LEGADOS: UMA ABORDAGEM BASEADA EM ANÁLISE DE LOGS, QUESTIONÁRIOS E ENTREVISTAS

RODRIGO BRITO DE FREITAS LIMA 24 March 2025 (has links)
[pt] Sistemas legados ainda têm muita importância para corporações e usuários. O desafio de como modernizá-los é bastante explorado, seja na reescrita total do sistema ou na substituição das tecnologias subjacentes. Tentativas ad hoc de modernização de sistemas podem ser caóticas e custosas. Muitos artigos vêm buscando formas de enfrentar esse desafio, mas não encontramos estudos satisfatórios que propusessem uma forma de manter as funcionalidades e construir um novo design de interface e melhor experiência do usuário, transformando sistemas estáticos em sistemas inteligentes. Essa dissertação propõe uma abordagem para enfrentar esse desafio, trazer recomendações e tornar os sistemas legados em sistemas inteligentes com baixo custo e baixo esforço, mantendo as funcionalidades principais do sistema legado. Através de análises de logs de uso, questionários e entrevistas com os usuários, identificamos dificuldades, pontos fracos do sistema, comportamento de uso e novas oportunidades de melhoria. / [en] Legacy systems are still very important to corporations and users. The challenge of how to modernize them is widely explored, whether by completely rewriting the system or by replacing the underlying technologies. Ad hoc attempts to modernize systems can be chaotic and costly. Many articles have looked for ways to face this challenge, but we have not found satisfactory studies that propose a way to maintain functionality while building a new interface design and a better user experience, transforming static systems into intelligent systems. This dissertation proposes an approach to face this challenge, bring recommendations, and turn legacy systems into intelligent systems at low cost and with low effort while maintaining the main functionalities of the legacy system. Through analysis of usage logs, questionnaires and interviews with users, we identify difficulties, system weaknesses, usage behavior and new opportunities for improvement.
27

Korektorské vlastnosti sedimentárních hornin z karotážních měření / Well log analysis for sedimentary formation evaluation

Šálek, Ondřej January 2013 (has links)
The work focuses on the analysis of five structural well profiles penetrating sediments of the Bohemian Cretaceous Basin and the underlying Upper Palaeozoic continental basins down to the crystalline basement. The objectives of the well profile analysis are sedimentary formation evaluation from well logs and statistical analysis and evaluation of selected physical properties of sedimentary rocks determined by measurements of drill cores. The aim of the work is to verify the possibility of porosity evaluation from well log analysis in the Bohemian Cretaceous Basin and the underlying Upper Palaeozoic continental basins; a further aim is to compare different geological environments with respect to the physical properties of rocks. The work involves presentation of well log curves, computation of porosity values, and comparison between the porosity values obtained from the resistivity log, the acoustic log, the neutron-neutron log, and laboratory measurements of drill core samples. Data from five deep structural wells are used, and the different geological environments were compared by statistical methods with respect to physical properties measured on well core samples from these five wells. Porosity evaluation from well log analysis is difficult but it is possible provided that...
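Illustrative aside: a standard relation for the porosity-from-acoustic-log step (not necessarily the one used in the thesis) is the Wyllie time-average equation, sketched below with common sandstone and fresh-water transit-time constants as assumptions.

```python
# Sketch: sonic-log porosity via the Wyllie time-average equation,
#   phi = (dt_log - dt_matrix) / (dt_fluid - dt_matrix).
# The matrix/fluid transit times below are common textbook values for
# sandstone and fresh water, assumed here for illustration.
def wyllie_porosity(dt_log: float, dt_matrix: float = 55.5, dt_fluid: float = 189.0) -> float:
    """Transit times in microseconds per foot; returns fractional porosity."""
    phi = (dt_log - dt_matrix) / (dt_fluid - dt_matrix)
    return min(max(phi, 0.0), 1.0)  # clamp to the physical range

for dt in (60.0, 80.0, 100.0):
    print(f"dt = {dt:5.1f} us/ft -> porosity = {wyllie_porosity(dt):.2f}")
```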
28

Knowledge Driven Search Intent Mining

Jadhav, Ashutosh 31 May 2016 (has links)
No description available.
29

In-situ stress analysis and fracture characterization in oil reservoirs with complex geological settings: A multi-methodological approach in the Zagros fold and thrust belt / 複雑な地質条件を有する石油貯留層における原位置応力とフラクチャーの総合解析:ザクロス褶曲衝上断層帯におけるマルチ手法の展開

Nazir, Mafakheri Bashmagh 25 March 2024 (has links)
Kyoto University / New-system doctoral course / Doctor of Engineering / Register No. Kō 25259 / Engineering Doctorate No. 5218 / Shinsei||Kō||1995 (University Library) / Department of Urban Management, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor Weiren Lin, Professor Sumihiko Murata, Professor Eiichi Fukuyama / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DFAM
30

Evaluation of Automotive Data mining and Pattern Recognition Techniques for Bug Analysis

Gawande, Rashmi 02 February 2016 (has links) (PDF)
In an automotive infotainment system, while analyzing bug reports, developers have to spend significant time reading log messages and trying to locate anomalous behavior before identifying its root cause. The log messages need to be viewed in a Traceviewer tool to be read in a human-readable form, and have to be extracted to text files by applying manual filters in order to further analyze the behavior. There is a need to evaluate machine learning/data mining methods which could potentially assist in error analysis. One such method could be learning patterns for "normal" messages. "Normal" could even mean that they contain keywords like "exception", "error", or "failed" but are harmless or not relevant to the bug that is currently analyzed. These patterns could then be applied as a filter, leaving behind only truly anomalous messages that are interesting for analysis. A successful application of the filter would reduce the noise, leaving only a few "anomalous" messages. After evaluation of the researched candidate algorithms, two algorithms, namely GSP and FP-Growth, were found useful and thus implemented together in a prototype. The prototype implementation includes processes such as pre-processing, creation of input, executing the algorithms, creation of a training set, and analysis of new trace logs. Execution of the prototype resulted in reduced manual effort, thus achieving the objective of this thesis work.
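Illustrative aside: a minimal sketch of the FP-Growth half of such a prototype, using the mlxtend library over whitespace-tokenized messages; the sample logs, tokenization, and support threshold are illustrative assumptions. Frequent token sets learned from normal traces act as the filter, and messages matching none of them remain as candidates for analysis.

```python
# Sketch: mine frequent token sets from "normal" trace logs with
# FP-Growth (mlxtend), then filter a new trace, keeping only messages
# that match no frequent pattern. Data and threshold are illustrative.
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth
import pandas as pd

normal_messages = [
    "service audio started ok",
    "service audio stopped ok",
    "service tuner started ok",
    "service tuner stopped ok",
]
transactions = [msg.split() for msg in normal_messages]

te = TransactionEncoder()
df = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)
patterns = fpgrowth(df, min_support=0.5, use_colnames=True)
frequent_sets = [set(s) for s in patterns["itemsets"]]

new_trace = ["service audio started ok", "watchdog reset during boot"]
anomalous = [m for m in new_trace
             if not any(fs <= set(m.split()) for fs in frequent_sets)]
print(anomalous)  # the watchdog message remains for manual analysis
```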
