11

Graph-based Analysis of Dynamic Systems

Schiller, Benjamin 23 November 2017 (has links) (PDF)
The analysis of dynamic systems provides insights into their time-dependent characteristics. This enables us to monitor, evaluate, and improve systems from various areas. Such systems are often represented as graphs that model the system's components and their relations. The analysis of the resulting dynamic graphs yields great insights into the system's underlying structure, its characteristics, and the properties of single components. Interpreting these results helps us understand how a system works and how parameters influence its performance. This knowledge supports the design of new systems and the improvement of existing ones.

The main issue in this scenario is the performance of analyzing the dynamic graph to obtain the relevant properties. While various approaches have been developed to analyze dynamic graphs, it is not always clear which one performs best for a specific graph. The runtime also depends on many other factors, including the size and topology of the graph, the frequency of changes, and the data structures used to represent the graph in memory. While the benefits and drawbacks of many data structures are well known, their runtime is hard to predict when they are used to represent dynamic graphs. Hence, tools are required to benchmark and compare different algorithms for the computation of graph properties and different data structures for the representation of dynamic graphs in memory. Based on deeper insights into their performance, new algorithms can be developed and efficient data structures selected.

In this thesis, we present four contributions to tackle these problems: a benchmarking framework for dynamic graph analysis, novel algorithms for the efficient analysis of dynamic graphs, an approach for the parallelization of dynamic graph analysis, and a novel paradigm to select and adapt graph data structures. In addition, we present three use cases from the areas of social, computer, and biological networks to illustrate the insights provided by their graph-based analysis.

We present a new benchmarking framework for the analysis of dynamic graphs, the Dynamic Network Analyzer (DNA). It provides tools to benchmark and compare different algorithms for the analysis of dynamic graphs as well as the data structures used to represent them in memory. DNA supports the development of new algorithms and the automatic verification of their results. Its visualization component provides different ways to represent dynamic graphs and the results of their analysis.

We introduce three new stream-based algorithms for the analysis of dynamic graphs. We evaluate their performance on synthetic as well as real-world dynamic graphs and compare their runtimes to those of snapshot-based algorithms. Our results show great performance gains for all three algorithms. The new stream-based algorithm StreaM_k, which counts the frequencies of k-vertex motifs, achieves speedups of up to 19,043x on synthetic and 2,882x on real-world datasets.

We present a novel approach for the distributed processing of dynamic graphs, called parallel Dynamic Graph Analysis (pDNA). To analyze a dynamic graph, the work is distributed by a partitioner that creates subgraphs and assigns them to workers. The workers compute the properties of their respective subgraphs using standard algorithms, and a collator component merges their results into the properties of the original graph. We evaluate the performance of pDNA for the computation of five graph properties on two real-world dynamic graphs with up to 32 workers. Our approach achieves great speedups, especially for the analysis of complex graph measures.

We introduce two novel approaches for the selection of efficient graph data structures. The compile-time approach estimates the workload of an analysis after an initial profiling phase and recommends efficient data structures based on benchmarking results. It achieves speedups of up to 5.4x over baseline data structure configurations for the analysis of real-world dynamic graphs. The run-time approach monitors the workload during the analysis and exchanges the graph representation if it finds a configuration that promises to be more efficient for the current workload. Compared to baseline configurations, it achieves speedups of up to 7.3x for the analysis of a synthetic workload.

Our contributions provide novel approaches for the efficient analysis of dynamic graphs and tools to further investigate the trade-offs between the factors that influence its performance.
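To make the stream-based idea concrete, here is a minimal sketch (our illustration, not DNA's actual API or the StreaM_k algorithm itself) that maintains the count of the simplest motif, the triangle, incrementally as edges arrive. A snapshot-based algorithm would recount from scratch after every change; the stream-based update touches only the neighborhoods of the changed edge, which is where speedups of the reported magnitude come from.

```python
# Minimal sketch of stream-based analysis (illustrative only): maintain
# the global triangle count incrementally as edges are added or removed.

class StreamingTriangleCounter:
    def __init__(self):
        self.adj = {}        # vertex -> set of neighbors
        self.triangles = 0   # running motif count

    def _neighbors(self, v):
        return self.adj.setdefault(v, set())

    def add_edge(self, u, v):
        # Each common neighbor of u and v closes one new triangle.
        self.triangles += len(self._neighbors(u) & self._neighbors(v))
        self._neighbors(u).add(v)
        self._neighbors(v).add(u)

    def remove_edge(self, u, v):
        self._neighbors(u).discard(v)
        self._neighbors(v).discard(u)
        # Every triangle that contained (u, v) disappears with it.
        self.triangles -= len(self._neighbors(u) & self._neighbors(v))

g = StreamingTriangleCounter()
for edge in [(1, 2), (2, 3), (1, 3), (3, 4)]:   # tiny synthetic stream
    g.add_edge(*edge)
print(g.triangles)  # 1, since only {1, 2, 3} forms a triangle
```

Each update costs roughly the degree of the smaller endpoint, whereas a snapshot-based recount touches the entire graph on every change.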
12

Physical Layer Security vs. Network Layer Secrecy: Who Wins on the Untrusted Two-Way Relay Channel?

Richter, Johannes, Franz, Elke, Engelmann, Sabrina, Pfennig, Stefan, Jorswieck, Eduard A. 07 July 2014 (has links) (PDF)
We consider the problem of secure communications in a Gaussian two-way relay network where two nodes exchange confidential messages only via an untrusted relay. The relay is assumed to be honest but curious, i.e., an eavesdropper that conforms to the system rules and applies the intended relaying scheme. We analyze the achievable secrecy rates by applying network coding on the physical layer or the network layer and compare the results in terms of complexity, overhead, and efficiency. Further, we discuss the advantages and disadvantages of the respective approaches.
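For intuition only, here is a small numeric sketch of the physical-layer baseline: over Gaussian channels, the achievable secrecy rate is at most the positive gap between the legitimate channel capacity and the eavesdropper's. The SNR values below are invented for illustration and are not taken from the paper, which analyzes the more involved two-way relay setting.

```python
import math

def awgn_capacity(snr):
    # Shannon capacity of a Gaussian channel in bits per channel use.
    return math.log2(1 + snr)

def secrecy_rate(snr_legitimate, snr_eavesdropper):
    # Classic wiretap bound: the positive part of the capacity gap.
    return max(0.0, awgn_capacity(snr_legitimate) - awgn_capacity(snr_eavesdropper))

# Hypothetical SNRs: the untrusted relay (the "eavesdropper") hears a
# superposition of both uplink signals, degrading its per-message SNR.
print(secrecy_rate(snr_legitimate=15.0, snr_eavesdropper=3.0))  # 2.0 bits
```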
13

Community-Based Intrusion Detection

Weigert, Stefan 06 February 2017 (has links) (PDF)
Today, virtually every company worldwide is connected to the Internet. This widespread connectivity has given rise to sophisticated, targeted, Internet-based attacks. For example, between 2012 and 2013, security researchers counted an average of about 74 targeted attacks per day. These attacks are motivated by economic, financial, or political interests and are commonly referred to as Advanced Persistent Threat (APT) attacks. Unfortunately, many of these attacks are successful, and the adversaries manage to steal important data or disrupt vital services. The preferred victims are companies from vital industries, such as banks, defense contractors, or power plants. Given that these industries are well protected, often employing a team of security specialists, the question is: how can these attacks be so successful?

Researchers have identified several properties of APT attacks that make them so effective. First, they are adaptable: the attackers can change the way they attack and the tools they use at any given moment. Second, they conceal their actions and communication, for example by using encryption, which renders many defense systems useless because those systems assume complete access to the actual communication content. Third, their actions are stealthy, either keeping communication to a bare minimum or mimicking legitimate users, which lets them fly below the radar of defense systems that check for anomalous communication. Finally, to increase their impact or monetization prospects, the attacks are targeted against several companies from the same industry. Since months can pass between the first attack, its detection, and a comprehensive analysis, it is often too late to deploy appropriate countermeasures at business peers; it is much more likely that they have already been attacked successfully.

This thesis tries to answer the question of whether the last property, industry-wide attacks, can be used to detect such attacks. It presents the design, implementation, and evaluation of a community-based intrusion detection system capable of protecting businesses at industry scale. The contributions of this thesis are as follows. First, it presents a novel algorithm for community detection that can identify an industry (e.g., the energy, financial, or defense industry) in Internet communication. Second, it demonstrates the design, implementation, and evaluation of a distributed graph mining engine that scales with the throughput of the input data while maintaining an end-to-end latency for updates in the range of a few milliseconds. Third, it illustrates the use of this engine to detect APT attacks against industries by analyzing IP flow information from an Internet service provider. Finally, it introduces an algorithm- and input-agnostic intrusion detection engine that supports not only intrusion detection on IP flows but any other intrusion detection algorithm and data source as well.
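To picture the first contribution, the sketch below runs plain label propagation, a standard community detection baseline rather than the thesis's novel algorithm, on a toy communication graph. Hosts that communicate mostly with each other converge to a shared label, which is the structure an industry would exhibit in ISP flow data.

```python
import random
from collections import Counter, defaultdict

def label_propagation(edges, rounds=10, seed=0):
    # Standard label-propagation baseline: every vertex repeatedly adopts
    # the most common label among its neighbors until labels stabilize.
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    labels = {v: v for v in adj}             # start with unique labels
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)                   # randomized update order
        for v in nodes:
            counts = Counter(labels[n] for n in adj[v])
            labels[v] = counts.most_common(1)[0][0]
    return labels

# Toy graph: two dense host clusters joined by a single cross edge.
edges = [(0, 1), (1, 2), (0, 2),
         (3, 4), (4, 5), (3, 5),
         (2, 3)]
# The two triangles typically settle on two distinct labels
# (tie-breaking in sparse spots can vary between runs).
print(label_propagation(edges))
```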
14

Automatic Hardening against Dependability and Security Software Bugs / Automatisches Härten gegen Zuverlässigkeits- und Sicherheitssoftwarefehler

Süßkraut, Martin 15 June 2010 (has links) (PDF)
It is a fact that software has bugs, and these bugs can lead to failures. Dependability and security failures in particular are a great threat to software users. This thesis introduces four novel approaches that can be used to automatically harden software at the user's site; automatic hardening removes bugs from already deployed software. All four approaches are automated, i.e., they require little support from the end user, although two of them need some support from the software developer. The presented approaches can be grouped into error toleration and bug removal. The two error-toleration approaches focus primarily on the fast detection of security errors; once an error is detected, it can be tolerated with well-known existing approaches. The other two approaches remove dependability bugs from already deployed software. We tested all approaches with existing benchmarks and applications, such as the Apache web server.
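The error-toleration idea can be sketched in a few lines (purely illustrative; the thesis targets deployed binaries, not this toy): detect a security error, here an out-of-bounds write, before it corrupts state, and tolerate it by rejecting the operation instead of letting the program fail.

```python
class HardenedBuffer:
    # Illustrative only: a checked wrapper that detects out-of-bounds
    # writes (a classic security bug) and tolerates them by refusing
    # the write, so the application keeps running instead of crashing
    # or being exploited.
    def __init__(self, size):
        self.data = bytearray(size)

    def write(self, index, value):
        if not 0 <= index < len(self.data):
            # Detection: the error is caught before memory is corrupted.
            # Toleration: drop the faulty request and report it.
            print(f"blocked out-of-bounds write at index {index}")
            return False
        self.data[index] = value
        return True

buf = HardenedBuffer(8)
buf.write(3, 0x41)    # legitimate write succeeds
buf.write(42, 0x41)   # attack-like write is detected and tolerated
```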
15

Beschreibung, Verarbeitung und Überprüfung clientseitiger Policies für vertrauenswürdige Cloud-Anwendungen / Description, Processing, and Verification of Client-Side Policies for Trustworthy Cloud Applications

Kebbedies, Jörg 08 February 2018 (has links) (PDF)
For business domains with high confidentiality and data protection requirements for the processing of their sensitive information, the use of public cloud technologies finds no user acceptance. The causes stem from the inherent structural concept of distributed, limited responsibilities and a lack of trust on the part of cloud users. This thesis pursues a cloud-user-oriented approach to enforcing regulating policy concepts, combined with a holistic approach to establishing an end-to-end basis of trust. The aspect of trust receives its own conceptualization and is developed into an instrument with which cloud users can shape trustworthy infrastructural properties. Every further form of policy develops its binding regulatory value only through an inseparable connection with the concepts of trustworthy entities presented here. An ontologically formalized description approach performs the conceptualization, necessary for regulation, of a domain-specific IT architecture and of qualifying security properties. Dedicated concept classes for regulation provide the descriptive framework for deriving integrated trust policies. Domain models built on top of these represent an expectation, defined by the cloud user, regarding a regulated cloud architecture design, and they reflect the real world on the basis of trustworthy facts. Trust is quantified as the result of logical inference and expresses assured cloud security properties and regulated forms of behavior.
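A minimal sketch of the policy idea, with an invented structure rather than the thesis's ontology: a client-side trust policy states the expected security properties, and trust is the outcome of logically checking attested facts against that expectation.

```python
# Hypothetical sketch: a client-side trust policy as a set of required
# security properties, checked against facts attested for a cloud domain.

required_properties = {
    "storage_encryption": "AES-256",
    "attested_boot": True,
    "data_location": "EU",
}

attested_facts = {
    "storage_encryption": "AES-256",
    "attested_boot": True,
    "data_location": "EU",
    "hypervisor": "Xen",
}

def policy_satisfied(policy, facts):
    # Trust as the outcome of logical inference: every expected property
    # must be backed by a matching attested fact.
    return all(facts.get(key) == value for key, value in policy.items())

print(policy_satisfied(required_properties, attested_facts))  # True
```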
