21. A structured approach to selecting the most suitable log management system for an organization. Kristiansson Herrera, Lucas. January 2020.
With the advent of digitalization, a typical organization today contains an ecosystem of servers, databases, and other components. These systems can produce large volumes of log data on a daily basis. By using a log management system (LMS) to collect, structure and analyze these log events, an organization could improve its services. The primary intent of this thesis is to construct a decision model that will aid organizations in finding the LMS that best fits their needs. To construct such a model, a number of log management products, both proprietary and open source, are investigated. Furthermore, good practices for handling log data are gathered from various papers and books on the subject. The result is a decision model that an organization can use when preparing for, choosing, implementing and maintaining an LMS. The decision model attempts to quantify various properties such as product features, but the LMSs it suggests should mostly be seen as a decision basis. To make the decision model more comprehensive and usable, more products should be included in it, and other factors that could play a part in finding a suitable LMS should be investigated.
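The abstract does not reproduce the model itself; a minimal sketch of the weighted-scoring idea it describes, with purely illustrative criteria, weights and product names (none of them taken from the thesis), could look like the following.

```python
# Hypothetical weighted-scoring sketch of an LMS decision model.
# Criteria, weights and product scores are illustrative placeholders.

criteria_weights = {          # how much the organization values each property
    "search_features": 0.4,
    "ease_of_maintenance": 0.3,
    "licensing_cost": 0.3,
}

product_scores = {            # per-product ratings on a 0-10 scale
    "LMS-A": {"search_features": 9, "ease_of_maintenance": 6, "licensing_cost": 4},
    "LMS-B": {"search_features": 7, "ease_of_maintenance": 8, "licensing_cost": 9},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion ratings into a single comparable score."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

ranking = sorted(product_scores, key=lambda p: weighted_score(product_scores[p]), reverse=True)
print(ranking)  # the top entry is a decision basis, not a final answer
```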
22. Analysis of Diameter Log Files with Elastic Stack / Analysering av Diameter log filer med hjälp av Elastic Stack. Olars, Sebastian. January 2020.
There is a growing need for more efficient tools and services for log analysis, a need that comes from the ever-growing use of digital services and applications, each one generating thousands of lines of log event messages for the sake of auditing and troubleshooting. This thesis was initiated on behalf of one of the departments of the IT consulting company TietoEvry in Karlstad. The purpose of the thesis project was to investigate whether the log analysis service Elastic Stack would be a suitable solution for TietoEvry's need for a more efficient method of log event analysis. As part of this investigation, a small-scale deployment of Elastic Stack was created and used as a proof of concept. The investigation showed that Elastic Stack would be a suitable tool for the monitoring and analysis needs of TietoEvry. The final deployment was, however, not able to fulfill all of the requirements initially set out by TietoEvry; this was mainly due to a lack of time rather than to limitations of Elastic Stack.
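The deployment details are not given in the abstract. As a rough illustration of the kind of ingestion step such a small-scale Elastic Stack proof of concept involves, the sketch below parses a log line and indexes it with the official Elasticsearch Python client (8.x). The index name, file name and log format are assumptions, not the actual Diameter format used at TietoEvry.

```python
import re
from datetime import datetime, timezone
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")

# Hypothetical log line format: "<timestamp> <level> <message>"
LINE_RE = re.compile(r"^(?P<ts>\S+ \S+)\s+(?P<level>\w+)\s+(?P<msg>.*)$")

def index_line(line: str) -> None:
    """Parse one log line and index it as a structured document."""
    match = LINE_RE.match(line)
    if not match:
        return
    doc = {
        "@timestamp": match.group("ts"),
        "level": match.group("level"),
        "message": match.group("msg"),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }
    es.index(index="diameter-logs", document=doc)

with open("diameter.log", encoding="utf-8") as fh:  # placeholder file name
    for line in fh:
        index_line(line.rstrip("\n"))
```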
23. A comparative analysis of log management solutions: ELK stack versus PLG stack. Eriksson, Joakim; Karavek, Anawil. January 2023.
Managing and analyzing large volumes of logs can be challenging, and a log management solution can effectively address this issue. However, selecting the right log management solution can be a daunting task, considering factors such as desired features and the solution's efficiency in terms of storage and resource usage. This thesis addressed the problem of choosing between two log management solutions: ELK and PLG. We compared their tailing agents, log storage and visualization capabilities to provide an analysis of their pros and cons. To compare the two log management solutions, we conducted two types of evaluation: a performance evaluation and a functional evaluation. Together, these evaluations provide a comprehensive picture of each tool's capabilities. The study found that PLG is more resource-efficient than ELK in terms of CPU and memory, and requires less disk space to store logs. ELK, however, performs better in terms of query request time. ELK has a more user-friendly interface and requires minimal configuration, while PLG requires more configuration but gives experienced users more control. With this study, we hope to provide organizations and individuals with a summary of the pros and cons of ELK and PLG that can help when choosing a log management solution.
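The query-time comparison mentioned above could be reproduced along the lines sketched below, timing a search against Elasticsearch (ELK) and a range query against Loki (PLG) over HTTP. The hosts, index pattern, label selector and query strings are assumptions for illustration, not the thesis' actual test queries.

```python
import time
import requests  # pip install requests

def time_elasticsearch_query(host: str = "http://localhost:9200") -> float:
    """Time a simple full-text search against an assumed logs-* index pattern."""
    body = {"query": {"match": {"message": "error"}}, "size": 100}
    start = time.perf_counter()
    requests.post(f"{host}/logs-*/_search", json=body, timeout=30).raise_for_status()
    return time.perf_counter() - start

def time_loki_query(host: str = "http://localhost:3100") -> float:
    """Time an equivalent LogQL query; start/end default to the last hour."""
    params = {"query": '{job="varlogs"} |= "error"', "limit": 100}
    start = time.perf_counter()
    requests.get(f"{host}/loki/api/v1/query_range", params=params, timeout=30).raise_for_status()
    return time.perf_counter() - start

print("Elasticsearch:", time_elasticsearch_query())
print("Loki         :", time_loki_query())
```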
24. Cyber Threat Intelligence from Honeypot Data using Elasticsearch. Al-Mohannadi, Hamad; Awan, Irfan U.; Al Hamar, J.; Cullen, Andrea J.; Disso, Jules P.; Armitage, Lorna. 18 May 2018.
Cyber attacks are increasing in every aspect of daily life. There are a number of different technologies for tackling cyber attacks, such as Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), firewalls, switches, routers, etc., which are active around the clock. These systems generate alerts and prevent cyber attacks. This is not a straightforward solution, however, as IDSs generate a huge volume of alerts that may or may not be accurate, potentially resulting in a large number of false positives. In most cases, therefore, these alerts are too numerous to handle. In addition, it is impossible to prevent cyber attacks simply by using tools. Instead, greater intelligence is required to fully understand an adversary's motive by analysing various types of Indicator of Compromise (IoC). It is also important for IT employees to have enough knowledge to identify true positive attacks and act according to the incident response process.

In this paper, we have proposed a new threat intelligence technique which is evaluated by analysing honeypot log data to identify the behaviour of attackers and find attack patterns. To achieve this goal, we deployed a honeypot on an AWS cloud to collect cyber incident log data. The log data is analysed using Elasticsearch technology, namely an ELK (Elasticsearch, Logstash and Kibana) stack.
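The paper describes the analysis pipeline only at a high level. One common way to surface attacker behaviour from honeypot data stored in Elasticsearch is a nested terms aggregation, sketched below with the Python client; the index name and field names are assumptions, not the paper's actual mapping.

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")

# Hypothetical honeypot index and field names; adjust to the actual mapping.
resp = es.search(
    index="honeypot-logs",
    size=0,
    aggs={
        "top_source_ips": {
            "terms": {"field": "src_ip.keyword", "size": 10},
            "aggs": {"commands": {"terms": {"field": "command.keyword", "size": 5}}},
        }
    },
)

# Most active source IPs and the commands they attempted.
for bucket in resp["aggregations"]["top_source_ips"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```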
25. Utilizing GAN and Sequence Based LSTMs on Post-RF Metadata for Near Real Time Analysis. Barnes-Cook, Blake Alexander. 17 January 2023.
Wireless anomaly detection is a mature field with several unique solutions. This thesis describes a novel way of detecting wireless anomalies using metadata-analysis-based methods. The metadata is processed and analyzed by an LSTM-based autoencoder and an LSTM-based feature analyzer to produce a wide range of anomaly scores. The anomaly scores are then uploaded and analyzed to identify any anomalous fluctuations. An associated tool can also automatically download live data, train, test, and upload results to the Elasticsearch database. The overall method is in sharp contrast to the more established approach of analyzing raw data from a Software Defined Radio, and has the potential to be scaled much more efficiently. / Master of Science / Wireless communications are a major part of our world. Detecting unusual changes in the wireless spectrum is therefore a high priority in maintaining networks and more. This thesis describes a method that allows centralized processing of wireless network output, allowing several areas to be monitored simultaneously. This is in sharp contrast to other methods, which generally must be located near the area being monitored. In addition, this implementation can be scaled more efficiently, as the hardware required for monitoring is less costly than the hardware required to process wireless data.
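The abstract names the model only in outline. A minimal sketch of an LSTM autoencoder whose reconstruction error serves as an anomaly score, assuming Keras, windowed metadata features and placeholder training data, might look like this; the window length, feature count and training setup are assumptions, not the thesis' actual configuration.

```python
import numpy as np
import tensorflow as tf  # pip install tensorflow

TIMESTEPS, FEATURES = 32, 8  # assumed window length and metadata feature count

# Encoder compresses each window to a latent vector; decoder reconstructs it.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(16),                          # encoder
    tf.keras.layers.RepeatVector(TIMESTEPS),
    tf.keras.layers.LSTM(16, return_sequences=True),   # decoder
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(FEATURES)),
])
model.compile(optimizer="adam", loss="mse")

X_train = np.random.rand(1000, TIMESTEPS, FEATURES).astype("float32")  # placeholder data
model.fit(X_train, X_train, epochs=5, batch_size=64, verbose=0)

# Anomaly score = per-window reconstruction error.
X_new = np.random.rand(10, TIMESTEPS, FEATURES).astype("float32")
scores = np.mean((model.predict(X_new, verbose=0) - X_new) ** 2, axis=(1, 2))
print(scores)
```

Scores like these could then be indexed into Elasticsearch and watched for anomalous fluctuations, as the abstract describes.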
26. A Performance Analysis of Intrusion Detection with Snort and Security Information Management / En Prestandaanalys av Intrångsdetektering med Snort och Hantering av Säkerhetsinformation. Thorarensen, Christian. January 2021.
Network intrusion detection systems (NIDSs) are a major component in cybersecurity and can be implemented with open-source software. Active communities and researchers continue to improve the projects and rulesets used for detecting threats to keep up with the rapid development of the internet. With the combination of security information management, automated threat detection updates and widely used software, NIDS security can be maximized. However, it is not clear how different combinations of software and basic settings affect network performance. The main purpose of this thesis was to find out how multithreading, standard ruleset configurations and near real-time data shipping affect Snort IDS' online and offline performance. The investigations and results were designed to guide researchers or companies toward maximum security with minimum impact on connectivity. Software used in performance testing was limited to Snort 2.9.17.1-WIN64 (IDS), Snort 3.1.0.0 (IDS), PulledPork (rule management) and Open Distro for Elasticsearch (information management). To increase the replicability of this study, the experimentation method was used, and network traffic generation was limited to 1.0 Gbit/s hardware. Offline performance was tested with traffic recorded from a webserver during February 2021 to increase the validity of the test results, but detection of attacks was not the focus. Through experimentation it was found that multithreading reduced the runtime of offline analysis by 68-74% on an octa-thread system. On the same system, Snort's drop rate was reduced from 9.0% to 1.1% by configuring multiple packet threads for 1.0 Gbit/s traffic. Secondly, the Snort Community and Proofpoint ET Open rulesets showed approximately 1% and 31% dropped packets, respectively. Finally, enabling data shipping services to integrate Snort with Open Distro for Elasticsearch (ODFE) did not have any negative impact on throughput, network delay or Snort's drop rate; however, the usability of ODFE needs further investigation. In conclusion, Snort 3 multithreading enabled major performance benefits, but not all open-source rules were available for it. In future work, the shared security information management solution could be expanded to include multiple Snort sensors, triggers, alerting (email) and suggested actions for detected threats.
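The offline multithreading experiment can be pictured as a small benchmark harness that replays the recorded traffic through Snort with different packet-thread counts and compares runtimes. The sketch below assumes a Snort 3 installation where the -z flag sets the maximum number of packet threads; the capture file and config paths are placeholders, not the thesis' actual setup.

```python
import subprocess
import time

PCAP = "webserver-feb2021.pcap"             # placeholder capture file
CONFIG = "/usr/local/etc/snort/snort.lua"   # placeholder Snort 3 config

def run_offline(threads: int) -> float:
    """Replay the capture through Snort 3 and return the wall-clock runtime."""
    cmd = ["snort", "-c", CONFIG, "-r", PCAP, "-z", str(threads), "-q"]
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

for threads in (1, 4, 8):
    print(f"{threads} packet thread(s): {run_offline(threads):.1f} s")
```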
27. Cyber Attack Modelling using Threat Intelligence. An investigation into the use of threat intelligence to model cyber-attacks based on elasticsearch and honeypot data analysis. Al-Mohannadi, Hamad. January 2019.
Cyber-attacks have become an increasing threat to organisations as well as the wider public, with greatly negative impacts on the economy at large and on the everyday lives of people. Every successful cyber-attack on targeted devices and networks highlights the weaknesses within the defence mechanisms responsible for securing them. Gaining a thorough understanding of cyber threats beforehand is therefore essential to prevent potential attacks in the future. Numerous efforts have been made to avoid cyber-attacks and protect the valuable assets of an organisation. However, the most recent cyber-attacks have exhibited profound levels of sophistication and intelligence on the part of the attacker, and have shown conventional attack detection mechanisms to fail in several attack situations. Several researchers have highlighted this issue previously, along with the challenges faced by alternative solutions. There is clearly an unprecedented need for a solution that takes a proactive approach to understanding potential cyber threats in real-time situations.
This thesis proposes a progressive and multi-aspect solution comprising cyber-attack modelling for the purpose of cyber threat intelligence. The proposed model emphasises approaches by which organisations can understand and predict future cyber-attacks by collecting and analysing network events to identify attacker activity. This could then be used to understand the nature of an attack and build a threat intelligence framework. However, collecting and analysing live data from a production system can be challenging and even dangerous, as it may leave the system more vulnerable. The solution detailed in this thesis deployed cloud-based honeypot technology, which is well known for mimicking the real system while collecting actual data, to observe network activity and help avoid potential attacks in near real time.
In this thesis, we have suggested a new threat intelligence technique based on analysing attack data collected using cloud-based web services in order to identify attack artefacts and support active threat intelligence. This model was evaluated through experiments specifically designed using Elastic Stack technologies. The experiments were designed to assess the identification and prediction capability of the threat intelligence system for several different attack cases. The proposed cyber threat intelligence and modelling systems showed significant potential to detect future cyber-attacks in real time. / Government of Qatar
28. Webová aplikace pro pořizování nových záběrů historických fotografií / Web App for Capturing New Shots of Historical Photographs. Sikora, Martin. January 2018.
The aim of this diploma thesis is to design and implement a web application focused on rephotography management. The work analyzes existing solutions, creates a list of features and designs a simple graphical user interface. It also includes the design of an API structure for communicating with the mobile application. Essential application requirements include adding photos on a map and combining different photos in a photo editor with enhanced auto-alignment features.
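The auto-alignment feature is only named in the abstract. A common way to implement such alignment is feature matching followed by a homography, roughly as sketched below with OpenCV; this is an assumed approach for illustration, not the thesis' actual implementation.

```python
import cv2  # pip install opencv-python
import numpy as np

def align(new_photo_path: str, historical_path: str) -> np.ndarray:
    """Warp the new photo onto the historical one using ORB features and a homography."""
    new_img = cv2.imread(new_photo_path, cv2.IMREAD_GRAYSCALE)
    old_img = cv2.imread(historical_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    kp_new, des_new = orb.detectAndCompute(new_img, None)
    kp_old, des_old = orb.detectAndCompute(old_img, None)

    # Match descriptors and keep the best correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_new, des_old), key=lambda m: m.distance)[:200]

    src = np.float32([kp_new[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_old[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = old_img.shape
    return cv2.warpPerspective(new_img, H, (w, h))
```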
29. A Comparative Analysis of the Ingestion and Storage Performance of Log Aggregation Solutions: Elastic Stack & SigNoz. Duras, Robert. January 2024.
As infrastructures and software grow in complexity, the need to keep track of what is happening becomes important. It is the job of log aggregation solutions to condense log data into a form that is easier to search, visualize, and analyze. There are many log aggregation solutions available today, with various pros and cons that suit different types of data and architectures. This makes selecting a log aggregation solution an important choice. This thesis analyzes two full-stack log aggregation solutions, Elastic Stack and SigNoz, with the goal of evaluating how the ingestion and storage components of the two stacks perform with smaller and larger amounts of data. The evaluation was done by ingesting log files of varying sizes into each solution while tracking its performance. These performance metrics were then analyzed to find similarities and differences. The thesis found that SigNoz featured higher CPU usage on average, faster processing times, and lower memory usage. Elastic Stack was found to do more processing and indexing of the data, requiring more memory and storage space in order to allow more detailed searchability of the ingested data. This also meant that Elastic Stack required more storage space than SigNoz to store the ingested logs. The hope of this thesis is that these findings can provide insight into the area and aid those choosing between the two solutions in making a more informed decision.
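The CPU and memory tracking described above could be done with a small sampling harness such as the sketch below, which watches the ingestion/storage processes of whichever stack is under test while a log file is being ingested. The use of psutil, the sampling interval and the example PID are assumptions, not the thesis' actual measurement method.

```python
import time
import psutil  # pip install psutil

def sample_stack(pids: list[int], duration_s: int = 60, interval_s: float = 1.0) -> dict:
    """Sample total CPU and memory of the given processes while logs are ingested."""
    procs = [psutil.Process(pid) for pid in pids]
    for p in procs:
        p.cpu_percent(interval=None)  # prime the CPU counters
    cpu, rss = [], []
    end = time.time() + duration_s
    while time.time() < end:
        cpu.append(sum(p.cpu_percent(interval=None) for p in procs))
        rss.append(sum(p.memory_info().rss for p in procs))
        time.sleep(interval_s)
    return {
        "avg_cpu_percent": sum(cpu) / len(cpu),
        "peak_rss_mb": max(rss) / 1024 / 1024,
    }

# Example: PIDs of the Elasticsearch (or SigNoz/ClickHouse) processes under test.
print(sample_stack(pids=[12345], duration_s=30))
```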
30. Identifiering av anomalier i COSMIC genom analys av loggar / Identification of anomalies in COSMIC through log analysis. Al-egli, Muntaher; Zeidan Nasser, Adham. January 2015.
Loggar är en viktig del av alla system; de ger en inblick i vad som sker. Att analysera loggar och extrahera väsentlig information är en av de största trenderna nu inom IT-branschen. Informationen i loggar är värdefulla resurser som kan användas för att upptäcka anomalier och hantera dessa innan de drabbar användaren. I detta examensarbete dyker vi in i grunderna för informationssökning och analyserar undantagsutskrifter i loggar från COSMIC för att undersöka om det är möjligt att upptäcka anomalier med hjälp av retrospektiva data. Detta examensarbete ger även en inblick i möjligheten att visualisera data från loggar och erbjuda en kraftfull sökmotor. Därför kommer vi att fördjupa oss i de tre välkända program som adresserar frågorna i centraliserad loggning: Elasticsearch, Logstash och Kibana. Sammanfattningsvis visar resultatet att det är möjligt att upptäcka anomalier genom att tillämpa statistiska metoder både på retrospektiv- och realtidsdata. / Logs are an important part of any system; they provide an insight into what is happening. One of the biggest trends in the IT industry is analyzing logs and extracting essential information. The information in logs is a valuable resource that can be used to detect anomalies and manage them before they affect the user. In this thesis we dive into the basics of information retrieval and analyze exception printouts in logs from COSMIC to investigate whether it is feasible to detect anomalies using retrospective data. This thesis also gives an insight into the possibility of visualizing data from logs and offering a powerful search engine. We therefore dive into the three well-known applications that address the issues of centralized logging: Elasticsearch, Logstash and Kibana. In summary, our results show that it is possible to detect anomalies by applying statistical methods to both retrospective and real-time data.
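The statistical method is not specified in the abstract. A simple illustration of the general idea, flagging time windows whose exception counts deviate from a rolling baseline, might look like the following; the input file, column names, window sizes and threshold are assumptions, not the thesis' actual parameters.

```python
import pandas as pd

# Assumed input: one row per logged exception with a timestamp column.
events = pd.read_csv("cosmic_exceptions.csv", parse_dates=["timestamp"])

# Count exceptions per 5-minute window.
counts = events.set_index("timestamp").resample("5min").size()

# Rolling baseline over the previous 24 windows (2 hours of history).
mean = counts.rolling(window=24, min_periods=6).mean()
std = counts.rolling(window=24, min_periods=6).std()

# Flag windows more than 3 standard deviations above the baseline.
anomalies = counts[counts > mean + 3 * std]
print(anomalies)
```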