81

Robust Optimization of Private Communication in Multi-Antenna Systems

Wolf, Anne 02 June 2015 (has links)
The thesis focuses on the privacy of communication that can be ensured by means of the physical layer, i.e., by appropriately chosen coding and resource allocation schemes. The fundamentals of physical-layer security were already formulated in the 1970s by Wyner (1975) and by Csiszár and Körner (1978), but only today's technical progress allows these ideas to find their way into current and future communication systems, which has driven the growing interest in this area of research in recent years. We analyze two physical-layer approaches that can ensure the secret transmission of private information in wireless systems in the presence of an eavesdropper. One is the direct transmission of the information to the intended receiver, where the transmitter has to ensure the reliability and the secrecy of the information simultaneously. The other is a two-phase approach, in which two legitimate users first agree on a common secret key, which they afterwards use to encrypt the information before it is transmitted; in this case, the secrecy and the reliability of the transmission are handled separately in the two phases. The secrecy of the transmitted messages mainly depends on reliable information or on reasonable and justifiable assumptions about the channel to the potential eavesdropper. Perfect state information about the channel to a passive eavesdropper is not a rational assumption, so we introduce a deterministic model for the uncertainty about this channel, which yields a set of possible eavesdropper channels. For both approaches, we consider the optimization of worst-case rates in systems with multi-antenna Gaussian channels. We study which transmit strategy yields the maximum rate if the eavesdropper is assumed to always observe the corresponding worst-case channel, i.e., the channel that reduces the achievable rate of the secret transmission to a minimum. For both approaches, we show that the resulting max-min problem over the matrices that describe the multi-antenna system can be reduced to an equivalent problem over the eigenvalues of these matrices. We characterize the optimal resource allocation under a sum power constraint over all antennas and derive waterfilling solutions for the corresponding worst-case channel to the eavesdropper under a constraint on the sum of all channel gains. We show that all rates converge to finite limits at high signal-to-noise ratio (SNR) if the number of eavesdropper antennas is not restricted; these limits are characterized by the quotients of the eigenvalues of the Gramian matrices of the two channels. In the low-SNR regime, the rate increase of the direct-transmission approach depends only on the differences of these eigenvalues, whereas the key generation approach shows no dependence on the eavesdropper channel in this regime. The comparison of the two approaches shows that which one is superior mainly depends on the SNR and on the quality of the eavesdropper channel: direct transmission is advantageous at low SNR and for comparably bad eavesdropper channels, whereas the key generation approach benefits from high SNR and comparably good eavesdropper channels. All results are discussed and illustrated in detail.
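A rough numerical illustration of the high-SNR saturation described above: the per-eigenmode secrecy rate expression, the sample eigenvalues, and the equal power split in the sketch below are illustrative assumptions, not the allocation schemes derived in the thesis.

```python
import numpy as np

def secrecy_rate(power, legit_eig, eve_eig):
    """Sum of per-eigenmode secrecy rates for parallel Gaussian channels.

    power:     per-mode transmit powers
    legit_eig: eigenvalues (gains) of the legitimate channel's Gramian
    eve_eig:   eigenvalues of the (worst-case) eavesdropper channel's Gramian
    """
    per_mode = np.log2(1 + power * legit_eig) - np.log2(1 + power * eve_eig)
    return np.sum(np.maximum(per_mode, 0.0))  # only modes with a positive rate contribute

# Illustrative eigenvalues (assumed): legitimate channel stronger than the eavesdropper's.
legit_eig = np.array([4.0, 2.0, 1.0])
eve_eig   = np.array([1.0, 0.5, 0.25])

for snr_db in [0, 10, 20, 40, 60]:
    p_total = 10 ** (snr_db / 10)
    p = np.full(3, p_total / 3)          # naive equal split, not the waterfilling solution
    print(snr_db, secrecy_rate(p, legit_eig, eve_eig))

# The printed rates approach sum(log2(legit_eig / eve_eig)) = 6 bits as SNR grows,
# i.e., a finite high-SNR limit governed by the quotients of the eigenvalues.
print(np.sum(np.log2(legit_eig / eve_eig)))
```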
82

Efficient Algorithms for the Computation of Optimal Quadrature Points on Riemannian Manifolds

Gräf, Manuel 30 May 2013 (has links)
We consider the problem of numerical integration, where one aims to approximate the integral of a given continuous function from its values at given sampling points, also known as quadrature points. A useful framework for such an approximation process is provided by the theory of reproducing kernel Hilbert spaces and the concept of the worst case quadrature error. However, the computation of optimal quadrature points, which minimize the worst case quadrature error, is in general a challenging task and requires efficient algorithms, in particular for large numbers of points. The focus of this thesis is the efficient computation of optimal quadrature points on the torus T^d, the sphere S^d, and the rotation group SO(3). To this end, we present a general framework for minimizing the worst case quadrature error on Riemannian manifolds, which allows us to construct such quadrature points numerically. For M quadrature points on a manifold, we regard the worst case quadrature error as a function defined on the M-fold product manifold. For the optimization on such high-dimensional manifolds we use the method of steepest descent, the Newton method, and the conjugate gradient method, and we propose two efficient approaches for evaluating the worst case quadrature error and its derivatives. The first evaluation approach follows ideas from computational physics, where we interpret the quadrature error as a pairwise potential energy; for certain instances, this allows us to reduce the complexity of the evaluations from O(M^2) to O(M log(M)). For the second evaluation approach we express the worst case quadrature error in the Fourier domain. This enables us to utilize the nonequispaced fast Fourier transforms for the torus T^d, the sphere S^2, and the rotation group SO(3), which reduce the computational complexity of the worst case quadrature error for polynomial spaces of degree N from O(N^k M) to O(N^k log^2(N) + M), where k is the dimension of the corresponding manifold. For the usual choice N^k ~ M we thus achieve a complexity of O(M log^2(M)) instead of O(M^2). In conjunction with the proposed conjugate gradient method on Riemannian manifolds, this yields a particularly efficient optimization approach for computing optimal quadrature points on the torus T^d, the sphere S^d, and the rotation group SO(3). With the proposed optimization methods we are able to provide new lists of quadrature formulas for high polynomial degrees N on the sphere S^2 and the rotation group SO(3). Further applications of the proposed optimization framework arise from the interesting connections between worst case quadrature errors, discrepancies, and potential energies. In particular, discrepancies provide an intuitive notion for describing the uniformity of point distributions and are of particular importance for high-dimensional integration in quasi-Monte Carlo methods. A generalized form of uniform point distributions arises in applications of image processing and computer graphics, where one is concerned with distributing points in an optimal way according to a prescribed density function. We show that such problems can be naturally described by the notion of discrepancy and thus fit perfectly into the proposed framework. A typical application is the halftoning of images, where nonuniform distributions of black dots create the illusion of gray-toned images; the proposed optimization methods compete with state-of-the-art halftoning methods.
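As a toy version of the potential-energy viewpoint mentioned above, the sketch below spreads M points on the sphere S^2 by steepest descent on a pairwise logarithmic energy, projecting back onto the sphere after each step. The choice of energy, the fixed step size, and the plain projected-descent scheme are assumptions for illustration; they stand in for the worst case quadrature errors, Riemannian optimization methods, and fast (NFFT-based) summation techniques used in the thesis, and the pairwise evaluation here costs O(M^2).

```python
import numpy as np

def log_energy_grad(X, eps=1e-12):
    """Gradient of the pairwise logarithmic energy sum_{i<j} -log ||x_i - x_j||."""
    diff = X[:, None, :] - X[None, :, :]              # (M, M, 3) pairwise differences
    dist2 = np.sum(diff ** 2, axis=-1) + eps
    np.fill_diagonal(dist2, np.inf)                   # ignore self-interaction
    return -np.sum(diff / dist2[..., None], axis=1)   # derivative w.r.t. each x_i

def spread_points_on_sphere(M=100, steps=500, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(M, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)     # start on the sphere
    for _ in range(steps):
        X -= lr * log_energy_grad(X)                  # steepest-descent step in R^3
        X /= np.linalg.norm(X, axis=1, keepdims=True) # project back onto S^2
    return X

points = spread_points_on_sphere()
```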
83

Efficient information leakage neutralization on a relay-assisted multi-carrier interference channel

Ho, Zuleita K.-M., Jorswieck, Eduard A., Engelmann, Sabrina 22 November 2013 (has links) (PDF)
In heterogeneous dense networks where spectrum is shared, users' privacy remains one of the major challenges. When the receivers are not only interested in their own signals but also in eavesdropping on other users' signals, the cross talk becomes information leakage. We propose a novel and efficient secrecy-rate-enhancing relay strategy, EFFIN, for information leakage neutralization. The relay matrix is chosen such that the effective leakage channel (spectral and spatial) is zero. EFFIN thus ensures secrecy regardless of the receive processing employed at the eavesdroppers and, unlike other physical-layer security techniques such as artificial noise, does not rely on wiretap codes. EFFIN achieves higher sum secrecy rates than several state-of-the-art baseline methods.
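In notation introduced here purely for illustration (and possibly differing from the paper's), the neutralization condition behind EFFIN for an amplify-and-forward relay with processing matrix F can be sketched as follows:

```latex
% Assumed notation: H_te transmitter -> eavesdropper, H_tr transmitter -> relay,
% H_re relay -> eavesdropper, F the relay processing matrix.
\[
  H_{\mathrm{eff}} \;=\; H_{te} + H_{re}\,F\,H_{tr} \;=\; 0
  \quad\Longrightarrow\quad
  F \;=\; -\,H_{re}^{+}\,H_{te}\,H_{tr}^{+},
\]
% where (\cdot)^{+} denotes a (pseudo-)inverse and the stated solution for F holds
% under suitable rank conditions; the signal component leaking to the eavesdropper
% then vanishes regardless of the receive processing applied there.
```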
84

Information Leakage Neutralization for the Multi-Antenna Non-Regenerative Relay-Assisted Multi-Carrier Interference Channel

Ho, Zuleita, Jorswieck, Eduard, Engelmann, Sabrina 21 October 2013 (has links) (PDF)
In heterogeneous dense networks where spectrum is shared, users' privacy remains one of the major challenges. On a multi-antenna relay-assisted multi-carrier interference channel, each user shares the spectral and spatial resources with all other users. When the receivers are not only interested in their own signals but also in eavesdropping on other users' signals, the cross talk on the spectral and spatial channels becomes information leakage. In this paper, we propose a novel secrecy-rate-enhancing relay strategy that utilizes both spectral and spatial resources, termed information leakage neutralization. To this end, the relay matrix is chosen such that the effective channel from the transmitter to the colluding eavesdropper is equal to the negative of the effective channel over the relay to the colluding eavesdropper, which forces the information leakage to zero. Interestingly, the optimal relay matrix is in general not block-diagonal, which encourages the users to encode across the frequency channels. We propose two information leakage neutralization strategies, namely efficient information leakage neutralization (EFFIN) and local-optimized information leakage neutralization (LOPTIN). EFFIN provides a simple and efficient design of the relay processing matrix and the precoding matrices at the transmitters for scenarios with limited power and computational resources. LOPTIN, despite its higher complexity, achieves better sum secrecy rate performance by optimizing the relay processing matrix and the precoding matrices jointly. The proposed methods are shown to improve the sum secrecy rates over several state-of-the-art baseline methods.
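A minimal numerical sketch of this neutralization condition on a single subcarrier, assuming square invertible channel matrices and ignoring the relay power constraint and the noise forwarded by the relay (both of which the EFFIN and LOPTIN designs account for); all matrix names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2  # antennas per node (assumed)

# Assumed flat-fading channels on one subcarrier:
H_te = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # transmitter -> eavesdropper
H_tr = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # transmitter -> relay
H_re = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # relay -> eavesdropper

# Choose the relay matrix so that the relayed leakage is the negative of the
# direct leakage: H_te + H_re @ F @ H_tr = 0.
F = -np.linalg.pinv(H_re) @ H_te @ np.linalg.pinv(H_tr)

effective_leakage = H_te + H_re @ F @ H_tr
print(np.linalg.norm(effective_leakage))  # ~1e-15: no signal component reaches the eavesdropper
```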
85

Aide au tolérancement tridimensionnel : modèle des domaines / Three-dimensional tolerancing assistance : domains model

Mansuy, Mathieu 25 June 2012 (has links)
Facing ever more demanding requirements on the quality and the manufacturing cost of manufactured products, the optimal qualification and quantification of acceptable defects is essential. Tolerancing is the means of communication that defines the geometric variations allowed between the different actors involved in the manufacturing cycle of the product. Optimal tolerancing is the right compromise between manufacturing cost and the quality of the final product. Tolerancing rests on three major issues: specification (standardization of a complete and unambiguous language), tolerance synthesis, and tolerance analysis. In this thesis we propose new methods for the analysis and synthesis of three-dimensional tolerancing. These methods are based on a model of the geometry that uses the clearance and deviation domains developed in the laboratory. The first step is to identify the elementary topologies that make up a three-dimensional mechanism; for each of these topologies, a method for solving the tolerancing problems is defined. In the worst case, the conditions for meeting the functional requirements translate into existence and inclusion conditions on the domains, and these domain equations can then be rewritten as a system of scalar inequalities. The statistical analysis relies on Monte Carlo simulation: the random variables are the small-displacement components of the deviation torsors, drawn inside their tolerance zones (modeled by deviation domains), and the geometric dimensions that set the extent of the clearances (the size of the associated clearance domains). From the statistical simulations, the risk of non-quality and the residual clearances can be estimated as a function of the defined tolerances. A new, better-suited representation of the clearance and deviation domains simplifies the computations involved in the tolerancing problems, and the local treatment of each elementary topology enables the global treatment of complex three-dimensional mechanisms while taking clearances into account.
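As a deliberately simplified illustration of this Monte Carlo analysis, the one-dimensional stack-up below draws deviations uniformly inside assumed tolerance zones and estimates a non-quality rate against an assumed functional requirement; the tolerance values, distributions, and requirement are illustrative and stand in for the three-dimensional clearance and deviation domains of the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Assumed tolerance zones (mm) for three stacked part deviations.
tol_zones = [(-0.05, 0.05), (-0.03, 0.03), (-0.04, 0.04)]
# Assumed clearance domain: the gap can take any value in [0, 0.10] mm.
clearance = rng.uniform(0.00, 0.10, n_samples)

# Draw the deviation components uniformly inside their tolerance zones.
deviations = np.column_stack([rng.uniform(lo, hi, n_samples) for lo, hi in tol_zones])

# Functional requirement (assumed): the residual gap must stay between 0 and 0.15 mm.
residual_gap = clearance + deviations.sum(axis=1)
non_quality = np.mean((residual_gap < 0.0) | (residual_gap > 0.15))

print(f"estimated non-quality rate: {non_quality:.4f}")
print(f"mean residual clearance:    {residual_gap.mean():.4f} mm")
```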
86

Efficient information leakage neutralization on a relay-assisted multi-carrier interference channel

Ho, Zuleita K.-M., Jorswieck, Eduard A., Engelmann, Sabrina January 2013 (has links)
In heterogeneous dense networks where spectrum is shared, users' privacy remains one of the major challenges. When the receivers are not only interested in their own signals but also in eavesdropping on other users' signals, the cross talk becomes information leakage. We propose a novel and efficient secrecy-rate-enhancing relay strategy, EFFIN, for information leakage neutralization. The relay matrix is chosen such that the effective leakage channel (spectral and spatial) is zero. EFFIN thus ensures secrecy regardless of the receive processing employed at the eavesdroppers and, unlike other physical-layer security techniques such as artificial noise, does not rely on wiretap codes. EFFIN achieves higher sum secrecy rates than several state-of-the-art baseline methods.
87

Information Leakage Neutralization for the Multi-Antenna Non-Regenerative Relay-Assisted Multi-Carrier Interference Channel

Ho, Zuleita, Jorswieck, Eduard, Engelmann, Sabrina January 2013 (has links)
In heterogeneous dense networks where spectrum is shared, users' privacy remains one of the major challenges. On a multi-antenna relay-assisted multi-carrier interference channel, each user shares the spectral and spatial resources with all other users. When the receivers are not only interested in their own signals but also in eavesdropping on other users' signals, the cross talk on the spectral and spatial channels becomes information leakage. In this paper, we propose a novel secrecy-rate-enhancing relay strategy that utilizes both spectral and spatial resources, termed information leakage neutralization. To this end, the relay matrix is chosen such that the effective channel from the transmitter to the colluding eavesdropper is equal to the negative of the effective channel over the relay to the colluding eavesdropper, which forces the information leakage to zero. Interestingly, the optimal relay matrix is in general not block-diagonal, which encourages the users to encode across the frequency channels. We propose two information leakage neutralization strategies, namely efficient information leakage neutralization (EFFIN) and local-optimized information leakage neutralization (LOPTIN). EFFIN provides a simple and efficient design of the relay processing matrix and the precoding matrices at the transmitters for scenarios with limited power and computational resources. LOPTIN, despite its higher complexity, achieves better sum secrecy rate performance by optimizing the relay processing matrix and the precoding matrices jointly. The proposed methods are shown to improve the sum secrecy rates over several state-of-the-art baseline methods.
88

Two-phase WCET analysis for cache-based symmetric multiprocessor systems

Tsoupidi, Rodothea Myrsini January 2017 (has links)
The estimation of the worst-case execution time (WCET) of a task is a problem that concerns the field of embedded systems and, especially, real-time systems. Estimating a safe WCET for single-core architectures without speculative mechanisms is already a challenging task and an active research topic, and the advent of advanced hardware mechanisms, which often lack predictability, further complicates current WCET analysis methods. The field of embedded systems has strict safety requirements and is therefore conservative about speculative mechanisms. Nevertheless, even safety-critical applications are nowadays moving towards multiprocessor systems. In a multiprocessor system, each task that runs on one processing unit may affect the execution time of tasks running on other processing units. In shared-memory symmetric multiprocessor systems, this interference occurs through the shared memory and the common bus, and the presence of private caches introduces cache-coherence issues that create further dependencies between the tasks. The purpose of this thesis is twofold: (1) to evaluate the feasibility of an existing one-pass WCET analysis method with an integrated cache analysis and (2) to design and implement a cache-based multiprocessor WCET analysis by extending the single-core method. The single-core analysis is part of KTH's Timing Analysis (KTA) tool, whose WCET analysis uses Abstract Search-based WCET Analysis, a one-pass technique based on abstract interpretation. The feasibility evaluation includes the integration of microarchitectural features, such as the cache and the pipeline, into KTA; these features are necessary for extending the analysis to hardware models of modern embedded systems. The multiprocessor analysis of this work uses the single-core analysis in two stages to estimate the WCET of a task running in the presence of temporally and spatially interfering tasks. The first phase records the memory accesses of all temporally interfering tasks, and the second phase uses this information to perform the multiprocessor WCET analysis. The multiprocessor analysis assumes private caches and a shared communication bus and implements the MESI protocol to maintain cache coherence.
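A highly simplified sketch of this two-phase idea (the data structures, access traces, and penalty figures below are hypothetical; the actual analysis in KTA works on abstract states, a concrete cache model, and the MESI protocol rather than plain address sets):

```python
# Phase 1: record which cache lines the temporally interfering tasks may touch.
def record_interference(interfering_tasks, line_size=32):
    """Return the set of cache lines possibly written by the interfering tasks."""
    lines = set()
    for task in interfering_tasks:
        for addr in task["write_addresses"]:       # assumed per-task access trace
            lines.add(addr // line_size)
    return lines

# Phase 2: inflate the single-core WCET with coherence-induced misses.
def multiprocessor_wcet(task, interfering_lines, line_size=32,
                        hit_cycles=1, miss_penalty=20):
    """Pessimistic estimate: any access to a shared line may have been invalidated."""
    wcet = task["single_core_wcet"]                # result of the single-core analysis
    for addr in task["read_addresses"]:
        if addr // line_size in interfering_lines:
            wcet += miss_penalty - hit_cycles      # charge an extra miss for that access
    return wcet

# Toy usage with assumed traces:
others = [{"write_addresses": [0x1000, 0x1004, 0x2000]}]
task = {"single_core_wcet": 500,
        "read_addresses": [0x1000, 0x3000, 0x2004]}
shared = record_interference(others)
print(multiprocessor_wcet(task, shared))  # 500 + 2 * 19 = 538
```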
