1 |
Event handling techniques in high speed networks. Gardner, Robert David. January 2000 (has links)
No description available.
|
2 |
AUTOMATED CAPACITY PLANNING AND SUPPORT FOR ENTERPRISE APPLICATIONS. Thakkar, Dharmesh. 02 February 2009 (has links)
Capacity planning is crucial for the successful development of enterprise applications. Capacity planning activities are most consequential during the verification and maintenance phases of the Software Development Life Cycle. During the verification phase, analysts need to execute a large number of performance tests to build accurate performance models. Performance models help customers in capacity planning for their deployments. To build valid performance models, the performance tests must be redone for every release or build of an application. This is a time-consuming and error-prone manual process, which calls for tools and techniques to speed it up. In the maintenance phase, when customers run into performance and capacity related issues after deployment, they commonly engage the vendor of the application for troubleshooting and fine tuning of the troubled deployments. At the end of the engagement, analysts create an engagement report, which contains valuable information about the observed symptoms, attempted workarounds, identified problems, and the final solutions. Engagement reports are stored in a customer engagement repository. While the information stored in engagement reports is valuable in helping analysts with future engagements, no systematic techniques exist to retrieve relevant reports from such a repository.
In this thesis we present a framework for the systematic and automated building of capacity calculators during the software verification phase. Then, we present a technique to retrieve relevant reports from a customer engagement repository. Our technique helps analysts fix performance and capacity related issues in the maintenance phase by providing easy access to information from relevant reports. We demonstrate our contributions with case studies on an open-source benchmarking application and an enterprise application. / Thesis (Master, Computing) -- Queen's University, 2009-01-29 14:14:37.235
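The report-retrieval idea in this entry can be illustrated with a small TF-IDF similarity sketch; the mini-repository, the report texts, and the `retrieve` helper are invented for illustration and are not the thesis's actual technique or data.

```python
import math
from collections import Counter

# Hypothetical mini-repository of engagement reports (symptom descriptions).
reports = {
    "r1": "high cpu usage during peak load database connection pool exhausted",
    "r2": "memory leak in application server after long uptime",
    "r3": "slow response time caused by database lock contention under load",
}

def tfidf_vectors(docs):
    """Build a TF-IDF vector for each document in a dict of id -> text."""
    tokenized = {d: text.split() for d, text in docs.items()}
    df = Counter(w for toks in tokenized.values() for w in set(toks))
    n = len(docs)
    vecs = {}
    for d, toks in tokenized.items():
        tf = Counter(toks)
        vecs[d] = {w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf}
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k report ids most similar to a new symptom description."""
    vecs = tfidf_vectors({**docs, "_q": query})
    q = vecs.pop("_q")
    ranked = sorted(docs, key=lambda d: cosine(q, vecs[d]), reverse=True)
    return ranked[:k]

best = retrieve("database slow under heavy load", reports)
```

With the toy repository above, the query about a slow database under load ranks report `r3` first.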
|
3 |
Using MapReduce to scale event correlation discovery for process mining. Reguieg, Hicham. 19 February 2014 (has links) (PDF)
The volume of data related to business process execution is increasing significantly in the enterprise. Many data sources include events related to the execution of the same processes in various systems or applications. Event correlation is the task of analyzing a repository of event logs in order to find the set of events that belong to the same business process execution instance. This is a key step in the discovery of business processes from event execution logs. Event correlation is a computationally intensive task in the sense that it requires a deep analysis of very large and growing repositories of event logs, and the exploration of various possible relationships among the events. In this dissertation, we present a scalable data analysis technique to support efficient event correlation for mining business processes. We propose a two-stage approach to compute correlation conditions and their entailed process instances from event logs using the MapReduce framework. The experimental results show that the algorithm scales well to large datasets.
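The two-stage approach described above can be sketched in miniature (the correlation attributes and the toy event log are illustrative assumptions, not the dissertation's actual implementation): a map phase emits one key per candidate correlation attribute/value pair, and a reduce phase groups the events sharing each key into candidate process instances.

```python
from collections import defaultdict
from itertools import chain

# Hypothetical event log: each event carries candidate correlation attributes.
events = [
    {"id": 1, "order": "A", "customer": "c1"},
    {"id": 2, "order": "A", "customer": "c1"},
    {"id": 3, "order": "B", "customer": "c1"},
    {"id": 4, "order": "B", "customer": "c2"},
]

def map_phase(event):
    """Stage 1 (map): emit one key per candidate correlation attribute/value."""
    for attr in ("order", "customer"):
        yield (attr, event[attr]), event["id"]

def reduce_phase(pairs):
    """Stage 2 (reduce): group event ids sharing a key into candidate instances."""
    groups = defaultdict(set)
    for key, eid in pairs:
        groups[key].add(eid)
    # Candidate process instances, listed per correlation condition (attribute).
    instances = defaultdict(list)
    for (attr, _value), ids in sorted(groups.items()):
        instances[attr].append(sorted(ids))
    return dict(instances)

instances = reduce_phase(chain.from_iterable(map_phase(e) for e in events))
```

Each candidate correlation condition ("same order", "same customer") yields its own partition of the log into process instances, which a later step would score and select among.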
|
4 |
[en] AN ARCHITECTURE FOR REAL TIME LOG EVENTS PROCESSING / [pt] UMA ARQUITETURA PARA PROCESSAMENTO DE EVENTOS DE LOG EM TEMPO REAL. RICARDO GOMES CLEMENTE. 10 December 2008 (links)
[pt] Logs são, atualmente, riquíssima fonte de informação para administradores de sistemas e analistas de negócio. Em ambientes com grande volume de acesso e infra-estrutura de centenas de servidores, processar toda a informação gerada e correlacioná-la com o objetivo de identificar situações de interesse técnico e de negócio em tempo real é considerado um grande desafio. Nesse sentido, são explicados tanto os conceitos relacionados aos arquivos de log e aos sistemas que se propõem a gerenciá-los, quanto os métodos e ferramentas de correlação de eventos em tempo real, para que, então, seja proposta uma arquitetura de sistema capaz de lidar com o desafio citado. Por fim, um protótipo é desenvolvido e uma prova de conceito baseada em um caso real de uso é realizada. /
[en] Logs are, nowadays, a rich source of information for system administrators and business analysts. In environments with a high access volume and hundreds of servers, processing all the information generated and correlating it, in order to identify technical and business situations of interest in real time, is considered a major challenge. With that in mind, the concepts related to log files and the systems that aim to manage them are explained, along with methods and tools for real-time event correlation, so that a system architecture capable of overcoming the stated challenge can then be proposed. Finally, a prototype is developed and a proof of concept based on a real use case is carried out.
|
5 |
Evaluation of and Mitigation against Malicious Traffic in SIP-based VoIP Applications in a Broadband Internet Environment. Wulff, Tobias. January 2010 (has links)
Voice over IP (VoIP) telephony is becoming widespread, and is often integrated into computer networks. Because of this, it is likely that malicious software will threaten VoIP systems the same way traditional computer systems have been attacked by viruses, worms, and other automated agents. While most users have become familiar with email spam and viruses in email attachments, spam and malicious traffic over telephony are currently a relatively unknown threat. VoIP networks are a challenge to secure against such malware, as much of the network intelligence is focused on the edge devices and access environment.
A novel security architecture is being developed which improves the security of a large VoIP network with many inexperienced users, such as non-IT office workers or telecommunication service customers. The new architecture establishes interaction between the VoIP backend and the end users, thus providing information about ongoing and unknown attacks to all users. The effectiveness and performance of different implementations of this architecture are evaluated using virtual machines and network simulation software to emulate vulnerable clients and servers exposing apparent attack vectors.
|
6 |
Modelization and identification of multi-step cyberattacks in sets of events / Modélisation et identification de cyberattaques multi-étapes dans des ensembles d'événements. Navarro Lara, Julio. 14 March 2019 (links)
Une cyberattaque est considérée comme multi-étapes si elle est composée d'au moins deux actions différentes. L'objectif principal de cette thèse est d'aider l'analyste de sécurité dans la création de modèles de détection à partir d'un ensemble de cas alternatifs d'attaques multi-étapes. Pour répondre à cet objectif, nous présentons quatre contributions de recherche. D'abord, nous avons réalisé la première bibliographie systématique sur la détection d'attaques multi-étapes. Une des conclusions de cette bibliographie est le manque de méthodes pour confirmer les hypothèses formulées par l'analyste de sécurité pendant l'investigation des attaques multi-étapes passées. Cela nous conduit à la deuxième de nos contributions, le graphe des scénarios d'attaque abstrait ou AASG. Dans un AASG, les propositions alternatives sur les étapes fondamentales d'une attaque sont représentées comme des branches à évaluer à l'arrivée de nouveaux événements. Pour cette évaluation, nous proposons deux modèles, Morwilog et Bidimac, qui effectuent la détection en même temps que l'identification des hypothèses correctes. L'évaluation des résultats par l'analyste permet l'évolution des modèles. Finalement, nous proposons un modèle pour l'investigation visuelle des scénarios d'attaque sur des événements non traités. Ce modèle, qui s'appelle SimSC, est basé sur la similarité entre les adresses IP, en prenant en compte la distance temporelle entre les événements. / A cyberattack is considered multi-step if it is composed of at least two distinct actions. The main goal of this thesis is to help the security analyst in the creation of detection models from a set of alternative multi-step attack cases. To meet this goal, we present four research contributions. First of all, we have conducted the first systematic survey of multi-step attack detection. One of the conclusions of this survey is the lack of methods to confirm the hypotheses formulated by the security analyst during the investigation of past multi-step attacks. This leads us to the second of our contributions, the Abstract Attack Scenario Graph or AASG. In an AASG, alternative proposals about the fundamental steps of an attack are represented as branches to be evaluated on new incoming events. For this evaluation, we propose two models, Morwilog and Bidimac, which perform detection at the same time as the identification of correct hypotheses. The evaluation of the results by the analyst allows the evolution of the models. Finally, we propose a model for the visual investigation of attack scenarios in unprocessed events. This model, called SimSC, is based on IP address similarity, taking into account the temporal distance between the events.
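As a rough illustration of the AASG idea, the sketch below represents alternative hypotheses as branches of event-type sequences and checks which branch an incoming event stream confirms; the branch contents and the matching rule are simplified assumptions, not the actual behavior of Morwilog or Bidimac.

```python
# Hypothetical AASG: each branch is an alternative hypothesis about the
# ordered fundamental steps of an attack (event types, heavily simplified).
branches = {
    "h1": ["scan", "brute_force", "exfiltration"],
    "h2": ["scan", "exploit", "exfiltration"],
}

def matches(branch, stream):
    """True if the branch's steps occur in order within the stream
    (subsequence matching; the membership test consumes the iterator)."""
    it = iter(stream)
    return all(step in it for step in branch)

def evaluate(branches, stream):
    """Return the hypothesis branches confirmed by an incoming event stream."""
    return sorted(h for h, steps in branches.items() if matches(steps, stream))

stream = ["scan", "login", "exploit", "download", "exfiltration"]
confirmed = evaluate(branches, stream)
```

On this toy stream only hypothesis `h2` survives; in the thesis's setting the analyst's feedback on such outcomes is what drives the evolution of the model.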
|
7 |
Abstracting and correlating heterogeneous events to detect complex scenarios. Panichprecha, Sorot. January 2009 (links)
The research presented in this thesis addresses inherent problems in signature-based intrusion detection systems (IDSs) operating in heterogeneous environments. It proposes a solution to the difficulties associated with multi-step attack scenario specification and detection in such environments. The research has focused on two distinct problems: the representation of events derived from heterogeneous sources, and multi-step attack specification and detection. The first part of the research investigates the application of an event abstraction model to event logs collected from a heterogeneous environment. The event abstraction model comprises a hierarchy of events derived from different log sources such as system audit data, application logs, captured network traffic, and intrusion detection system alerts. Unlike existing event abstraction models, where low-level information may be discarded during the abstraction process, the event abstraction model presented in this work preserves all low-level information while also providing high-level information in the form of abstract events. The model was designed independently of any particular IDS and may therefore be used by any IDS, intrusion forensic tool, or monitoring tool. The second part of the research investigates the use of unification for multi-step attack scenario specification and detection. Multi-step attack scenarios are hard to specify and detect, as they often involve the correlation of events from multiple sources which may be affected by time uncertainty. The unification algorithm provides a simple and straightforward scenario matching mechanism by using variable instantiation, where variables represent events as defined in the event abstraction model. The third part of the research looks into addressing time uncertainty. Clock synchronisation is crucial for detecting multi-step attack scenarios which involve logs from multiple hosts. Issues involving time uncertainty have been largely neglected by intrusion detection research. The system presented in this research introduces two techniques for addressing time uncertainty: clock skew compensation and clock drift modelling using linear regression. An off-line IDS prototype for detecting multi-step attacks has been implemented. The prototype comprises two modules: an implementation of the abstract event system architecture (AESA) and the scenario detection module. The scenario detection module implements our signature language, developed based on the Python programming language syntax, and the unification-based scenario detection engine. The prototype has been evaluated using a publicly available dataset of real attack traffic and event logs, and a synthetic dataset. The distinctive feature of the public dataset is that it contains multi-step attacks involving multiple hosts with clock skew and clock drift, which allows us to demonstrate the application and advantages of the contributions of this research. All instances of multi-step attacks in the dataset were correctly identified even though significant clock skew and drift exist in the dataset. Future work identified by this research is to develop a refined unification algorithm suitable for processing streams of events, enabling on-line detection. In terms of time uncertainty, identified future work is to develop mechanisms that allow automatic clock skew and clock drift identification and correction. The immediate application of the research presented in this thesis is the framework of an off-line IDS which processes events from heterogeneous sources using abstraction and which can detect multi-step attack scenarios that may involve time uncertainty.
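Clock drift modelling with linear regression, as mentioned in the third part, can be pictured as an ordinary least-squares fit of measured clock offsets against time; the offset samples below are synthetic, and the helper names are not from the thesis.

```python
# Hypothetical offset measurements (seconds) of a host clock against a
# reference clock, taken at times t: offset = skew + drift * t (+ noise).
samples = [(0.0, 2.0), (10.0, 2.5), (20.0, 3.0), (30.0, 3.5)]

def fit_drift(samples):
    """Least-squares fit of offset(t) = skew + drift * t."""
    n = len(samples)
    mt = sum(t for t, _ in samples) / n
    mo = sum(o for _, o in samples) / n
    drift = (sum((t - mt) * (o - mo) for t, o in samples)
             / sum((t - mt) ** 2 for t, _ in samples))
    skew = mo - drift * mt
    return skew, drift

def correct(timestamp, skew, drift):
    """Map a host timestamp back onto the reference clock."""
    return timestamp - (skew + drift * timestamp)

skew, drift = fit_drift(samples)
```

Once fitted, `correct` removes both the constant skew and the time-dependent drift from each log timestamp, which is the precondition for unifying events from multiple hosts into one scenario.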
|
8 |
Probabilistic Fault Management in Networked Systems. Steinert, Rebecca. January 2014 (links)
Technical advances in network communication systems (e.g. radio access networks) combined with evolving concepts based on virtualization (e.g. clouds), require new management algorithms in order to handle the increasing complexity in the network behavior and variability in the network environment. Current network management operations are primarily centralized and deterministic, and are carried out via automated scripts and manual interventions, which work for mid-sized and fairly static networks. The next generation of communication networks and systems will be of significantly larger size and complexity, and will require scalable and autonomous management algorithms in order to meet operational requirements on reliability, failure resilience, and resource-efficiency. A promising approach to address these challenges includes the development of probabilistic management algorithms, following three main design goals. The first goal relates to all aspects of scalability, ranging from efficient usage of network resources to computational efficiency. The second goal relates to adaptability in maintaining the models up-to-date for the purpose of accurately reflecting the network state. The third goal relates to reliability in the algorithm performance in the sense of improved performance predictability and simplified algorithm control. This thesis is about probabilistic approaches to fault management that follow the concepts of probabilistic network management (PNM). An overview of existing network management algorithms and methods in relation to PNM is provided. The concepts of PNM and the implications of employing PNM-algorithms are presented and discussed. Moreover, some of the practical differences of using a probabilistic fault detection algorithm compared to a deterministic method are investigated. Further, six probabilistic fault management algorithms that implement different aspects of PNM are presented. 
The algorithms are highly decentralized, adaptive and autonomous, and cover several problem areas, such as probabilistic fault detection and controllable detection performance; distributed and decentralized change detection in modeled link metrics; root-cause analysis in virtual overlays; event-correlation and pattern mining in data logs; and, probabilistic failure diagnosis. The probabilistic models (for a large part based on Bayesian parameter estimation) are memory-efficient and can be used and re-used for multiple purposes, such as performance monitoring, detection, and self-adjustment of the algorithm behavior. / QC 20140509
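One way to picture probabilistic fault detection with Bayesian parameter estimation is a Beta-Bernoulli estimator of a link's loss rate that raises an alarm when the posterior mean crosses a threshold; the prior, threshold, and probe outcomes below are illustrative assumptions, not the thesis's algorithms.

```python
class BetaLossEstimator:
    """Estimate a link's packet-loss probability from probe outcomes
    with a Beta prior, raising an alarm on a high posterior mean."""

    def __init__(self, alpha=1.0, beta=1.0, threshold=0.3):
        self.alpha, self.beta = alpha, beta   # Beta prior pseudo-counts
        self.threshold = threshold            # illustrative alarm level

    def observe(self, lost):
        """Update the posterior with one probe outcome (1 = probe lost)."""
        if lost:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def loss_rate(self):
        """Posterior mean of the link loss probability."""
        return self.alpha / (self.alpha + self.beta)

    def alarm(self):
        return self.loss_rate > self.threshold

est = BetaLossEstimator()
for outcome in [0, 0, 1, 1, 1, 1]:
    est.observe(outcome)
```

Because the posterior is just two counters, the model is memory-efficient, and the same counters can be re-used for monitoring or for self-adjusting the detection threshold.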
|
9 |
Using MapReduce to scale event correlation discovery for process mining / Utilisation de MapReduce pour le passage à l'échelle de la corrélation des événements métiers dans le contexte de fouilles de processus. Reguieg, Hicham. 19 February 2014 (links)
Le volume des données relatives à l'exécution des processus métiers augmente de manière significative dans l'entreprise. Beaucoup de sources de données comprennent des événements liés à l'exécution des mêmes processus dans différents systèmes ou applications. La corrélation des événements est la tâche d'analyse d'un référentiel de journaux d'événements afin de trouver l'ensemble des événements qui appartiennent à la même trace d'exécution du processus métier. Il s'agit d'une étape clé dans la découverte des processus à partir de journaux d'événements d'exécution. La corrélation des événements est une tâche de calcul intensif dans le sens où elle nécessite une analyse approfondie des relations entre les événements dans des dépôts très grands et en croissance constante, et l'exploration des différentes relations possibles entre ces événements. Dans cette thèse, nous présentons une technique d'analyse de données évolutive pour soutenir de manière efficace la corrélation des événements pour la fouille des processus métiers. Nous proposons une approche en deux étapes pour calculer les conditions de corrélation et les instances de processus qui en découlent à partir des journaux d'événements en utilisant la plateforme MapReduce. Les résultats expérimentaux montrent que l'algorithme s'adapte parfaitement à de grands ensembles de données. / The volume of data related to business process execution is increasing significantly in the enterprise. Many data sources include events related to the execution of the same processes in various systems or applications. Event correlation is the task of analyzing a repository of event logs in order to find the set of events that belong to the same business process execution instance. This is a key step in the discovery of business processes from event execution logs. Event correlation is a computationally intensive task in the sense that it requires a deep analysis of very large and growing repositories of event logs, and the exploration of various possible relationships among the events. In this dissertation, we present a scalable data analysis technique to support efficient event correlation for mining business processes. We propose a two-stage approach to compute correlation conditions and their entailed process instances from event logs using the MapReduce framework. The experimental results show that the algorithm scales well to large datasets.
|
10 |
Tratamento de eventos em redes elétricas: uma ferramenta. / Treatment of events in electrical networks: a tool. DUARTE, Alexandre Nóbrega. 15 August 2018 (links)
Previous issue date: 2003-02-25 / Apresenta uma nova ferramenta para o diagnóstico automático de falhas em redes elétricas. A ferramenta utiliza uma técnica híbrida de correlação de eventos criada especialmente para ser utilizada em redes com constantes modificações de topologia. A técnica híbrida combina o raciocínio baseado em regras com o raciocínio baseado em modelos para eliminar as principais limitações do raciocínio baseado em regras. Com a ferramenta de diagnóstico foi possível validar o conhecimento dos especialistas em sistemas de transmissão de energia elétrica necessário para o diagnóstico de falhas em linhas de transmissão e construir uma base de regras para tal. A ferramenta foi testada no diagnóstico de falhas em linhas de transmissão de um dos cinco centros regionais da Companhia Hidro Elétrica do São Francisco (CHESF) e apresentou resultados satisfatórios de desempenho e precisão. / It presents a new tool for the automatic diagnosis of faults in electric networks. The tool uses a hybrid event correlation technique especially created to be used in networks with constant topological modifications. The hybrid technique combines rule-based reasoning with model-based reasoning to eliminate the main limitations of rule-based reasoning. With the tool it was possible to validate the knowledge acquired from electric energy transmission systems specialists needed for the diagnosis of faults in transmission lines, and to construct a rule base for that purpose. The tool was tested in the diagnosis of faults in transmission lines of one of the five regional centers of the Companhia Hidro Elétrica do São Francisco (CHESF) and presented satisfactory results in terms of performance and precision.
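The hybrid rule-plus-model idea can be pictured with a minimal sketch (not CHESF's or the thesis's actual system): a topology model records which protective devices guard each transmission line, and a rule declares a line faulted only when the model-predicted set of devices has reported an event, so the rules survive topology changes by reading the model instead of hard-coding device names.

```python
# Hypothetical topology model: line -> the protective devices guarding it.
# Updating this dict is how topology changes reach the (unchanged) rule.
topology = {
    "L1": {"breaker_A", "breaker_B", "relay_1"},
    "L2": {"breaker_C", "relay_2"},
}

def diagnose(events, topology):
    """Rule: a line is faulted if every one of its protective devices
    reported an event; the device set comes from the topology model."""
    observed = {e["device"] for e in events}
    return sorted(line for line, devices in topology.items()
                  if devices <= observed)

events = [
    {"device": "breaker_A", "type": "open"},
    {"device": "breaker_B", "type": "open"},
    {"device": "relay_1", "type": "trip"},
]
faulted = diagnose(events, topology)
```

Here the three events match exactly the devices the model lists for line `L1`, so only `L1` is diagnosed as faulted; a partial device set would produce no diagnosis.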
|