21

Secret key generation from reciprocal spatially correlated MIMO channels

Jorswieck, Eduard A., Wolf, Anne, Engelmann, Sabrina 16 June 2014 (has links) (PDF)
Secret key generation from reciprocal multi-antenna channels is an interesting alternative to cryptographic key management in wireless systems without infrastructure access. In this work, we study the secret key rate for the basic source model with a MIMO channel. First, we derive an expression for the secret key rate under spatial correlation modelled by the Kronecker model and with spatial precoding at both communication nodes. Next, we analyze the result for uncorrelated antennas to understand the optimal precoding for this special case, which is equal power allocation. Then, the impact of correlation is characterized using majorization theory. Surprisingly, at low SNR spatial correlation increases the secret key rate, whereas at high SNR the maximum secret key rate is achieved with uncorrelated antennas. The results indicate that a careful system design is required to realize the secret key rate gains of reciprocal MIMO key generation.
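As an aside, the source-model key rate studied here can be sketched numerically: for jointly Gaussian channel estimates at both nodes, the secret key rate equals the mutual information between the two estimates, which reduces to a log-determinant expression. The following is a minimal sketch under assumed parameters (exponential antenna correlation, equal estimation-noise variance at both nodes); it illustrates the quantity being optimized, not the paper's derivation or precoding design.

```python
import numpy as np

def exp_corr(n, rho):
    """Exponential correlation matrix [rho^|i-j|] (assumed correlation profile)."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def secret_key_rate(R, noise_var):
    """I(h_A; h_B) in bits for h_A = h + n_A, h_B = h + n_B with h ~ CN(0, R)
    and independent estimation noise of variance noise_var at each node."""
    n = R.shape[0]
    R_A = R + noise_var * np.eye(n)
    R_B = R + noise_var * np.eye(n)
    joint = np.block([[R_A, R], [R.conj().T, R_B]])
    log2det = lambda M: np.linalg.slogdet(M)[1] / np.log(2)
    return log2det(R_A) + log2det(R_B) - log2det(joint)

# 2x2 MIMO channel, Kronecker model R = R_tx (x) R_rx, SNR = 1 / noise_var
R = np.kron(exp_corr(2, 0.7), exp_corr(2, 0.3))
for snr_db in (-10, 0, 10, 20):
    rate = secret_key_rate(R, noise_var=10 ** (-snr_db / 10))
    print(f"SNR {snr_db:>3} dB: {rate:.3f} bits per channel estimate")
```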
22

Newsletter für Freunde, Absolventen und Ehemalige der Technischen Universität Chemnitz 1/2010

Steinebach, Mario, Thehos, Katharina 31 March 2010 (has links) (PDF)
Newsletter for friends, graduates, and former members of TU Chemnitz, published four times a year
23

Physical Layer Security vs. Network Layer Secrecy: Who Wins on the Untrusted Two-Way Relay Channel?

Richter, Johannes, Franz, Elke, Engelmann, Sabrina, Pfennig, Stefan, Jorswieck, Eduard A. January 2013 (has links)
We consider the problem of secure communications in a Gaussian two-way relay network where two nodes exchange confidential messages only via an untrusted relay. The relay is assumed to be honest but curious, i.e., an eavesdropper that conforms to the system rules and applies the intended relaying scheme. We analyze the achievable secrecy rates by applying network coding on the physical layer or the network layer and compare the results in terms of complexity, overhead, and efficiency. Further, we discuss the advantages and disadvantages of the respective approaches.
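As background for the network-layer option, the classic two-way relay exchange with an XOR network code can be sketched in a few lines: both nodes upload their messages, the relay broadcasts the XOR, and each node cancels its own message to recover the other's. This is a generic illustration (all names are made up), not the secrecy scheme analyzed in the paper; in particular, in the plain network-layer version the relay decodes both messages before combining them, so protecting them from an untrusted relay requires the additional measures compared in this work.

```python
def xor_bytes(x: bytes, y: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(x, y))

msg_a = b"message from A.."
msg_b = b"message from B.."

# Uplink: nodes A and B transmit to the relay (two channel uses).
# Downlink: the relay broadcasts a single combined packet instead of two.
relay_broadcast = xor_bytes(msg_a, msg_b)

# Each node cancels its own message to recover the other one.
assert xor_bytes(relay_broadcast, msg_a) == msg_b
assert xor_bytes(relay_broadcast, msg_b) == msg_a
```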
24

Comparison of Different Secure Network Coding Paradigms Concerning Transmission Efficiency

Pfennig, Stefan, Franz, Elke January 2013 (has links)
Preventing active attacks is essential for network coding, since the infiltration of even a single corrupted data packet can jam large parts of the network. Existing network coding schemes that prevent such pollution attacks fall into two categories: those based on cryptographic primitives and those based on redundancy similar to error-correction coding. In this paper, we compare both paradigms with respect to transmission efficiency under various circumstances. In particular, we consider attackers of different strengths as well as the influence of the generation size. The results help in selecting a suitable network coding approach that accounts for both security against pollution attacks and efficiency.
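The damage caused by a single polluted packet can be illustrated with a toy random linear network coding example over GF(2): an attacker injects one coded packet whose coefficient vector does not match its payload, intermediate nodes unknowingly mix it into their recombinations, and the receiver decodes corrupted source packets. This is a simplified illustration of the threat model, not one of the schemes compared in the paper; the generation size, field, and all names are assumptions.

```python
import random

G, PKT_LEN = 4, 8                         # generation size and payload length (assumed)
random.seed(0)
source = [bytes(random.randrange(256) for _ in range(PKT_LEN)) for _ in range(G)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def recombine(pool):
    """Intermediate node: XOR a random nonempty subset of the packets it has seen."""
    subset = [p for p in pool if random.random() < 0.5] or [pool[0]]
    coeffs, payload = [0] * G, bytes(PKT_LEN)
    for c, p in subset:
        coeffs = [a ^ b for a, b in zip(coeffs, c)]
        payload = xor(payload, p)
    return coeffs, payload

def decode(pkts):
    """Gaussian elimination over GF(2); returns the G payloads the receiver obtains."""
    rows = [(list(c), p) for c, p in pkts]
    for col in range(G):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           xor(rows[i][1], rows[col][1]))
    return [p for _, p in rows[:G]]

# Honest coded packets (systematic for simplicity) plus one polluted injection:
pool = [([1 if j == i else 0 for j in range(G)], source[i]) for i in range(G)]
pool.append(([1, 1, 0, 1], bytes(random.randrange(256) for _ in range(PKT_LEN))))

received = [recombine(pool) for _ in range(2 * G)]     # the network mixes packets
try:
    recovered = decode(received)
    bad = sum(r != s for r, s in zip(recovered, source))
    print(f"{bad} of {G} decoded source packets are corrupted")
except StopIteration:
    print("received packets were not full rank; more coded packets needed")
```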
25

A high-throughput in-memory index, durable on flash-based SSD: Insights into the winning solution of the SIGMOD programming contest 2011

Kissinger, Thomas, Schlegel, Benjamin, Böhm, Matthias, Habich, Dirk, Lehner, Wolfgang January 2012 (has links)
Growing memory capacities and the increasing number of cores on modern hardware call for new in-memory indexing structures that reduce the number of memory transfers and minimize the need for locking to allow massively parallel access. However, most applications depend on hard durability constraints and thus require a persistent medium such as SSDs, which narrow the latency and throughput gap between main memory and hard disks. In this paper, we present our winning solution of the SIGMOD Programming Contest 2011. It consists of an in-memory indexing structure that provides balanced read/write performance as well as non-blocking reads and single-lock writes. Complementary to this index, we describe an SSD-optimized logging approach that meets hard durability requirements at a high throughput rate.
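The combination of non-blocking reads, single-lock writes, and SSD-based logging can be sketched in a deliberately simplified form: readers follow an immutable snapshot reference, a single mutex serializes writers, and every update is appended to a log and synced before it becomes visible. This is only a rough illustration of the design goals named in the abstract, not the contest solution; the copy-on-write snapshot, the log format, and all names are assumptions (a real index would avoid copying the whole structure per write and would batch log writes).

```python
import json
import os
import threading

class DurableIndex:
    """Toy key-value index: lock-free reads via an immutable snapshot,
    a single writer lock, and an append-only write-ahead log for durability."""

    def __init__(self, log_path="index.log"):
        self._snapshot = {}                        # readers only ever see this reference
        self._write_lock = threading.Lock()        # 'single-lock writes'
        self._log = open(log_path, "a")

    def get(self, key):
        # Non-blocking read: take the current snapshot reference and look up.
        return self._snapshot.get(key)

    def put(self, key, value):
        with self._write_lock:
            # Durability first: append the update and force it to the SSD.
            self._log.write(json.dumps({"k": key, "v": value}) + "\n")
            self._log.flush()
            os.fsync(self._log.fileno())
            # Copy-on-write publish: readers switch to the new dict atomically.
            new_snapshot = dict(self._snapshot)
            new_snapshot[key] = value
            self._snapshot = new_snapshot

idx = DurableIndex()
idx.put("answer", 42)
print(idx.get("answer"))
```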
26

Measuring energy consumption for short code paths using RAPL

Hähnel, Marcus, Döbel, Björn, Völp, Marcus, Härtig, Hermann 28 May 2013 (has links)
Measuring the energy consumption of software components is a major building block for generating models that allow for energy-aware scheduling, accounting, and budgeting. Current measurement techniques focus on coarse-grained measurements of application or system events. However, fine-grained adjustments, in particular in the operating-system kernel and in application-level servers, require power profiles at the level of a single software function. Until recently, this appeared to be impossible due to the lack of fine-grained resolution and the high cost of measurement equipment. In this paper, we report on our experience in using the Running Average Power Limit (RAPL) energy sensors available in recent Intel CPUs for measuring the energy consumption of short code paths. We investigate the granularity at which RAPL measurements can be performed and discuss practical obstacles that occur when performing these measurements on complex modern CPUs. Furthermore, we demonstrate how to use the RAPL infrastructure to characterize the energy cost of decoding video slices.
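On Linux, the RAPL counters can be read without special equipment through the powercap sysfs interface, which is one simple way to bracket a short code path with energy readings. This is a generic sketch of that approach, not the instrumentation used by the authors; the package-domain path and the wrap-around handling below are assumptions that may differ across kernels and CPUs, and reading the counter typically requires root privileges.

```python
import time

RAPL_DIR = "/sys/class/powercap/intel-rapl:0"   # package-0 domain (path may vary)

def read_energy_uj(path=RAPL_DIR):
    """Current package energy counter in microjoules."""
    with open(f"{path}/energy_uj") as f:
        return int(f.read())

def max_energy_uj(path=RAPL_DIR):
    """Counter range, needed to correct for wrap-around."""
    with open(f"{path}/max_energy_range_uj") as f:
        return int(f.read())

def measure(fn, *args):
    """Run fn and return (result, energy in joules, wall time in seconds)."""
    e0, t0 = read_energy_uj(), time.perf_counter()
    result = fn(*args)
    e1, t1 = read_energy_uj(), time.perf_counter()
    delta = e1 - e0
    if delta < 0:                     # counter wrapped during the measurement
        delta += max_energy_uj()
    return result, delta / 1e6, t1 - t0

# Example: energy of a short, CPU-bound code path (averaging runs reduces noise).
_, joules, seconds = measure(lambda: sum(i * i for i in range(1_000_000)))
print(f"{joules:.4f} J in {seconds:.4f} s")
```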
27

Energy-Efficient In-Memory Database Computing

Lehner, Wolfgang January 2013 (has links)
The efficient and flexible management of large datasets is one of the core requirements of modern business applications. Having access to consistent and up-to-date information is the foundation for operational, tactical, and strategic decision making. Within the last few years, the database community has launched a large number of highly innovative research projects to push the envelope in the context of modern database system architectures. In this paper, we outline requirements and influencing factors to identify some of the hot research topics in database management systems. We argue that, even after 30 years of active database research, the time is right to rethink some of the core architectural principles and come up with novel approaches to meet the requirements of the next decades in data management. The sheer number of diverse and novel (e.g., scientific) application areas, the existence of modern hardware capabilities, and the need for large data centers to become more energy-efficient will be the drivers for database research in the years to come.
28

Wireless Interconnect for Board and Chip Level

Fettweis, Gerhard P., ul Hassan, Najeeb, Landau, Lukas, Fischer, Erik January 2013 (has links)
Electronic systems of the future require a very high-bandwidth communications infrastructure within the system. Only then can the massive amount of compute power that will be available be interconnected to realize powerful advanced electronic systems. Interconnects between 3D chip stacks, as well as intra-connects within them, will soon approach data rates of 100 Gbit/s. Hence, the question to be answered is how to efficiently design the communications infrastructure inside such electronic systems. This paper addresses approaches and results for building this infrastructure for future electronics.
29

Waiting for Locks: How Long Does It Usually Take?

Baier, Christel, Daum, Marcus, Engel, Benjamin, Härtig, Hermann, Klein, Joachim, Klüppelholz, Sascha, Märcker, Steffen, Tews, Hendrik, Völp, Marcus January 2012 (has links)
Reliability of low-level operating-system (OS) code is an indispensable requirement. This includes functional properties from the safety-liveness spectrum, but also quantitative properties stating, e.g., that the average waiting time on locks is sufficiently small or that the energy requirement of a certain system call is below a given threshold with high probability. This paper reports on our experience in an ongoing project whose goal is to apply probabilistic model checking techniques and to align the results of the model checker with measurements in order to predict quantitative properties of low-level OS code.
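A quantitative property such as "the average waiting time on locks is sufficiently small" can be made concrete with a small simulation: estimate the mean time a thread spends waiting before it acquires a contended lock. This Monte Carlo sketch only illustrates the kind of quantity being predicted; the paper uses probabilistic model checking rather than simulation, and the exponential holding and thinking times below are assumptions.

```python
import random

def average_wait(n_threads=4, hold_mean=1.0, think_mean=5.0, n_events=100_000, seed=42):
    """Estimate the mean waiting time at a single lock shared by n_threads,
    with exponentially distributed lock-holding and thinking times."""
    rng = random.Random(seed)
    lock_free_at = 0.0
    next_request = [rng.expovariate(1 / think_mean) for _ in range(n_threads)]
    total_wait, acquisitions = 0.0, 0

    for _ in range(n_events):
        t = min(range(n_threads), key=lambda i: next_request[i])  # earliest requester
        now = next_request[t]
        start = max(now, lock_free_at)          # wait if the lock is still held
        total_wait += start - now
        lock_free_at = start + rng.expovariate(1 / hold_mean)
        next_request[t] = lock_free_at + rng.expovariate(1 / think_mean)
        acquisitions += 1

    return total_wait / acquisitions

print(f"average waiting time: {average_wait():.3f} time units")
```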
30

Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

Baier, Christel, Daum, Marcus, Engel, Benjamin, Härtig, Hermann, Klein, Joachim, Klüppelholz, Sascha, Märcker, Steffen, Tews, Hendrik, Völp, Marcus January 2012 (has links)
Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits as the number of processes increases. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.
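The gain from symmetry reduction can be made tangible with a quick count: for n symmetric processes that are each in one of k local states, the explicit state space grows like k^n, whereas the symmetry-reduced (counter-abstracted) space only tracks how many processes are in each local state and grows polynomially in n. The sketch below shows this comparison; the state counts are a generic combinatorial illustration under an assumed number of local states, not the state spaces of the authors' test-and-test-and-set model.

```python
from math import comb

def explicit_states(n_procs, k_local_states):
    """States when each of the n processes is tracked individually: k^n."""
    return k_local_states ** n_procs

def symmetry_reduced_states(n_procs, k_local_states):
    """Counter abstraction: only the multiset of local states matters,
    i.e. the number of ways to distribute n indistinct processes over k states."""
    return comb(n_procs + k_local_states - 1, k_local_states - 1)

k = 3   # e.g. 'idle', 'spinning', 'in critical section' (assumed local states)
for n in (4, 16, 100):
    print(f"n={n:>3}: explicit={explicit_states(n, k):.3e}  "
          f"reduced={symmetry_reduced_states(n, k)}")
```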
