1. Deriving System Vulnerabilities Using Log Analytics

Higbee, Matthew Somers 01 November 2015
System administrators use many of the same tactics that hackers employ to validate the security of their systems, such as port scanning and vulnerability scanning. Port scanning is slow and can be highly inaccurate, and after a scan is complete its results must be cross-checked against a vulnerability database to discover whether any vulnerabilities are present. While these techniques are useful, they have severe limitations. System administrators have full access to all of their machines; they should not have to rely exclusively on scanning those machines from the outside to check for vulnerabilities when they have this level of access. This thesis introduces a novel concept for replacing port scanning with a Log File Inventory Management System. The system automatically builds an accurate system inventory from existing log files and cross-checks it against a database of known vulnerabilities in real time, resulting in faster and more accurate vulnerability reporting than traditional port scanning methods.
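
The core idea — deriving an inventory from log files and cross-checking it against a vulnerability database — can be sketched in a few lines of Python. The sketch below illustrates the concept rather than the thesis's implementation; the banner patterns and the vulnerability entries are invented placeholders.

```python
import re

# Illustrative startup-banner patterns; real log formats vary by service and distribution.
BANNER_PATTERNS = [
    re.compile(r"sshd.*OpenSSH_(?P<version>[\d.p]+)"),
    re.compile(r"Apache/(?P<version>[\d.]+) .*configured"),
]
SERVICE_NAMES = ["openssh", "apache"]  # parallel to BANNER_PATTERNS

# Hypothetical vulnerability database: (service, version) -> advisory id (placeholders only).
KNOWN_VULNERABLE = {
    ("openssh", "6.6p1"): "CVE-XXXX-YYYY (placeholder)",
    ("apache", "2.4.7"): "CVE-XXXX-ZZZZ (placeholder)",
}

def build_inventory(log_lines):
    """Derive a {service: version} inventory from raw log lines."""
    inventory = {}
    for line in log_lines:
        for service, pattern in zip(SERVICE_NAMES, BANNER_PATTERNS):
            match = pattern.search(line)
            if match:
                inventory[service] = match.group("version")
    return inventory

def check_vulnerabilities(inventory):
    """Cross-check the inventory against the (toy) vulnerability database."""
    return {
        (service, version): KNOWN_VULNERABLE[(service, version)]
        for service, version in inventory.items()
        if (service, version) in KNOWN_VULNERABLE
    }

if __name__ == "__main__":
    sample_log = [
        "May  1 10:00:01 host sshd[123]: Server listening, OpenSSH_6.6p1",
        "May  1 10:00:02 host apache2: Apache/2.4.7 (Ubuntu) configured -- resuming",
    ]
    findings = check_vulnerabilities(build_inventory(sample_log))
    for (service, version), advisory in findings.items():
        print(f"{service} {version}: {advisory}")
```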

2. Investigation and Implementation of a Log Management and Analysis Framework for the Treatment Planning System RayStation

Norrby, Elias January 2018
The purpose of this thesis is to investigate and implement a framework for log management and analysis tailored to the treatment planning system (TPS) RayStation. A TPS is a highly advanced software package used in radiation oncology clinics, and the complexity of the software makes writing robust code challenging. Although the product is tested rigorously during development, bugs are present in released software. The purpose of the framework is to give the RayStation development team insight into errors encountered in clinics by centralizing log file data recorded at clinics around the world. A framework based on the Elastic stack, a suite of open-source products, is proposed, addressing a set of known issues described as the access problem, the processing problem, and the analysis problem. Firstly, log files are stored locally on each machine running RayStation, some of which may not be connected to the Internet. Gaining access to the data is further complicated by legal frameworks such as HIPAA and GDPR that put constraints on how clinic data can be handled. The framework allows for access to the files while respecting these constraints. Secondly, log files are written in several different formats. The framework is flexible enough to process files of multiple different formats and consistently extracts relevant information. Thirdly, the framework offers comprehensive tools for analyzing the collected data. Deployed in-house on a set of 38 machines used by the RayStation development team, the framework was demonstrated to offer solutions to each of the listed problems.
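
In an Elastic-stack pipeline the "processing problem" of heterogeneous formats is typically handled by Logstash filters. The Python sketch below illustrates the same normalization idea under assumed log formats — both patterns are invented for the example and are not RayStation's actual formats.

```python
import re
from datetime import datetime

# Two illustrative log formats; the real formats are assumptions for this sketch.
FORMATS = [
    # e.g. "2018-03-01 12:00:00,123 ERROR Some message"
    (re.compile(r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d+ (?P<level>\w+) (?P<msg>.*)$"),
     "%Y-%m-%d %H:%M:%S"),
    # e.g. "[01/Mar/2018 12:00:00] ERROR: Some message"
    (re.compile(r"^\[(?P<ts>\d{2}/\w{3}/\d{4} \d{2}:\d{2}:\d{2})\] (?P<level>\w+): (?P<msg>.*)$"),
     "%d/%b/%Y %H:%M:%S"),
]

def normalize(line):
    """Map a raw log line in any known format onto a common event schema."""
    for pattern, ts_format in FORMATS:
        match = pattern.match(line)
        if match:
            return {
                "@timestamp": datetime.strptime(match.group("ts"), ts_format).isoformat(),
                "level": match.group("level"),
                "message": match.group("msg"),
            }
    return None  # unrecognized format; a real pipeline would tag this for review

print(normalize("2018-03-01 12:00:00,123 ERROR Dose engine failed"))
print(normalize("[01/Mar/2018 12:00:00] WARNING: License check slow"))
```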

3. Samling, sökning och visualisering av loggfiler från testenheter / Collection, Search and Visualization of Log Files from Test Devices

Rosenqvist, Fredrik, Henriksson, Thomas January 2015
Today, companies generate large volumes of log files, which makes it difficult to find and examine error messages across all of them. A log collector based on Logstash, Elasticsearch and Kibana has been implemented at Ericsson in Linköping. The purpose of the log collector is to gather logs from test devices and make it possible to search and visualize them. An evaluation of Elasticsearch was carried out to determine to what extent query time increases as the amount of data grows. The evaluation indicated that, in the worst case, query time grows linearly.
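
A measurement of how query time grows with data volume can be approximated with a small harness like the one below, which queries Elasticsearch's REST API directly. The host and index name are assumptions; the server-side timing is taken from the `took` field of the search response.

```python
import time
import requests

ES_URL = "http://localhost:9200"   # assumed local Elasticsearch instance
INDEX = "testunit-logs"            # hypothetical index name

QUERY = {"query": {"match": {"message": "error"}}}

def timed_search(url, index, query, runs=10):
    """Run the same query several times; report client-side and server-side latency in ms."""
    client_ms, server_ms = [], []
    for _ in range(runs):
        start = time.perf_counter()
        resp = requests.post(f"{url}/{index}/_search", json=query, timeout=30)
        resp.raise_for_status()
        client_ms.append((time.perf_counter() - start) * 1000)
        server_ms.append(resp.json()["took"])  # Elasticsearch's own timing, in ms
    return sum(client_ms) / runs, sum(server_ms) / runs

if __name__ == "__main__":
    doc_count = requests.get(f"{ES_URL}/{INDEX}/_count", timeout=30).json()["count"]
    avg_client, avg_server = timed_search(ES_URL, INDEX, QUERY)
    print(f"{doc_count} docs: {avg_client:.1f} ms round-trip, {avg_server:.1f} ms in Elasticsearch")
```

Repeating the run as the index grows (for example after each batch of ingested test logs) gives the data points needed to see whether query time scales linearly.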

4. Analysis of Diameter Log Files with Elastic Stack / Analysering av Diameter log filer med hjälp av Elastic Stack

Olars, Sebastian January 2020
There is a growing need for more efficient tools and services for log analysis, a need that comes from the ever-growing use of digital services and applications, each one generating thousands of lines of log event messages for the sake of auditing and troubleshooting. This thesis was initiated on behalf of one of the departments of the IT consulting company TietoEvry in Karlstad. The purpose of this thesis project was to investigate whether the log analysis service Elastic Stack would be a suitable solution for TietoEvry’s need for a more efficient method of log event analysis. As part of this investigation, a small-scale deployment of Elastic Stack was created and used as a proof of concept. The investigation showed that Elastic Stack would be a suitable tool for the monitoring and analysis needs of TietoEvry. The final version of the deployment did not, however, fulfill all of the requirements that were initially set out by TietoEvry; this was mainly due to a lack of time rather than to limitations of Elastic Stack.

5. A comparative analysis of log management solutions: ELK stack versus PLG stack

Eriksson, Joakim, Karavek, Anawil January 2023
Managing and analyzing large volumes of logs can be challenging, and a log management solution can effectively address this issue. However, selecting the right log management solution can be a daunting task, considering factors such as desired features and the solution's efficiency in terms of storage and resource usage. This thesis addressed the problem of choosing between two log management solutions: ELK and PLG. We compared their tailing agents, log storage, and visualization capabilities to provide an analysis of their pros and cons. To compare the two solutions we conducted two types of evaluation, a performance evaluation and a functional evaluation; together they provide a comprehensive picture of each tool's capabilities. The study found that PLG is more resource-efficient than ELK in terms of CPU and memory, and requires less disk space to store logs. ELK, however, performs better in terms of query response time. ELK has a more user-friendly interface and requires minimal configuration, while PLG requires more configuration but gives experienced users more control. With this study, we hope to provide organizations and individuals with a summary of the pros and cons of ELK and PLG that can help when choosing a log management solution.
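
A minimal version of such a performance comparison is to send an equivalent query to each stack's query endpoint and time the round trip. The sketch below assumes local default ports, an invented index name and job label, and is not the evaluation setup used in the thesis.

```python
import time
import requests

# Assumed local endpoints; adjust to the actual deployments.
ES_SEARCH = "http://localhost:9200/app-logs/_search"          # ELK side (index name assumed)
LOKI_QUERY = "http://localhost:3100/loki/api/v1/query_range"  # PLG side (Loki)

def time_request(send, runs=5):
    """Average wall-clock latency, in ms, of a query function over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        send().raise_for_status()
        samples.append(time.perf_counter() - start)
    return 1000 * sum(samples) / runs

def query_elk():
    # Full-text search for log lines containing "error".
    return requests.post(
        ES_SEARCH,
        json={"query": {"match_phrase": {"message": "error"}}, "size": 100},
        timeout=30,
    )

def query_plg():
    now_ns = time.time_ns()
    return requests.get(
        LOKI_QUERY,
        params={
            "query": '{job="app"} |= "error"',    # LogQL: lines containing "error"
            "start": now_ns - 3_600_000_000_000,  # last hour, in nanoseconds
            "end": now_ns,
            "limit": 100,
        },
        timeout=30,
    )

if __name__ == "__main__":
    print(f"ELK : {time_request(query_elk):.1f} ms")
    print(f"PLG : {time_request(query_plg):.1f} ms")
```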

6. A Comparative Analysis of the Ingestion and Storage Performance of Log Aggregation Solutions: Elastic Stack & SigNoz

Duras, Robert January 2024
As infrastructures and software grow in complexity, the need to keep track of what they are doing becomes more important. It is the job of log aggregation solutions to condense log data into a form that is easier to search, visualize, and analyze. There are many log aggregation solutions available today, with various pros and cons to fit different types of data and architectures. This makes selecting a log aggregation solution an important choice. This thesis analyzes two full-stack log aggregation solutions, Elastic Stack and SigNoz, with the goal of evaluating how the ingestion and storage components of the two stacks perform with smaller and larger amounts of data. The evaluation was done by ingesting log files of varying sizes into each solution while tracking its performance. These performance metrics were then analyzed to find similarities and differences. The thesis found that SigNoz featured higher average CPU usage, faster processing times, and lower memory usage. Elastic Stack was found to do more processing and indexing of the data, requiring more memory and storage space in order to allow more detailed searchability of the ingested data. This also meant that Elastic Stack required more storage space than SigNoz to store the ingested logs. The hope is that these findings provide insight into the area and aid those choosing between the two solutions in making a more informed decision.
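
An ingestion-and-storage measurement of the kind described here can be sketched for the Elasticsearch side as follows; the endpoint and index name are assumptions, synthetic events stand in for real log files, and the SigNoz side (which ingests through its OpenTelemetry collector) would need an analogous sender.

```python
import json
import time
import requests

ES_URL = "http://localhost:9200"  # assumed Elasticsearch endpoint
INDEX = "bench-logs"              # hypothetical benchmark index

def make_events(n):
    """Generate n synthetic log events standing in for lines read from a log file."""
    return [{"level": "INFO", "message": f"synthetic event {i}", "seq": i} for i in range(n)]

def bulk_ingest(events, chunk=1000):
    """Ingest events through the _bulk API and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    for offset in range(0, len(events), chunk):
        lines = []
        for event in events[offset:offset + chunk]:
            lines.append(json.dumps({"index": {"_index": INDEX}}))
            lines.append(json.dumps(event))
        requests.post(
            f"{ES_URL}/_bulk",
            data="\n".join(lines) + "\n",
            headers={"Content-Type": "application/x-ndjson"},
            timeout=60,
        ).raise_for_status()
    return time.perf_counter() - start

def index_size_bytes():
    """On-disk size of the index as reported by Elasticsearch's stats API."""
    stats = requests.get(f"{ES_URL}/{INDEX}/_stats/store", timeout=30).json()
    return stats["_all"]["total"]["store"]["size_in_bytes"]

if __name__ == "__main__":
    for n in (10_000, 100_000):
        requests.delete(f"{ES_URL}/{INDEX}", timeout=30)         # start from an empty index
        elapsed = bulk_ingest(make_events(n))
        requests.post(f"{ES_URL}/{INDEX}/_refresh", timeout=30)  # make stats reflect the ingest
        print(f"{n} events: {elapsed:.2f} s ingest, {index_size_bytes()} bytes on disk")
```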

7. Identifiering av anomalier i COSMIC genom analys av loggar / Identification of anomalies in COSMIC through log analysis

Al-egli, Muntaher, Zeidan Nasser, Adham January 2015
Logs are an important part of any system; they provide insight into what is happening. One of the biggest trends in the IT industry is analyzing logs and extracting essential information. The information in logs is a valuable resource that can be used to detect anomalies and handle them before they affect the user. In this thesis we dive into the basics of information retrieval and analyze exception messages in logs from COSMIC to investigate whether it is feasible to detect anomalies using retrospective data. The thesis also gives an insight into the possibility of visualizing log data and offering a powerful search engine. We therefore examine the three well-known applications that address the issues of centralized logging: Elasticsearch, Logstash and Kibana. In summary, our results show that it is possible to detect anomalies by applying statistical methods to both retrospective and real-time data.
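
A simple statistical detector of the kind described here can be built from a rolling baseline over exception counts per time bucket; for retrospective data the counts could, for example, come from an Elasticsearch date-histogram aggregation. The sketch below flags buckets whose count exceeds the trailing mean by a chosen number of standard deviations; the window, threshold, and sample data are illustrative assumptions.

```python
from statistics import mean, stdev

def detect_anomalies(counts, window=24, threshold=3.0):
    """Flag time buckets whose exception count deviates strongly from the trailing window.

    counts    -- exception counts per time bucket (e.g. per hour), oldest first
    window    -- number of trailing buckets used as the baseline
    threshold -- standard deviations above the baseline that count as anomalous
    """
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; skip rather than divide by zero
        z = (counts[i] - mu) / sigma
        if z > threshold:
            anomalies.append((i, counts[i], round(z, 1)))
    return anomalies

# Hourly exception counts, with an obvious spike near the end.
hourly_exceptions = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4,
                     3, 2, 4, 3, 4, 5, 3, 2, 4, 3, 4, 3,
                     4, 3, 40, 3]
print(detect_anomalies(hourly_exceptions))
```

The same rolling statistic can be applied incrementally to incoming buckets, which is what makes it usable on real-time as well as retrospective data.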
