151

Design and Performance Evaluation of Service Discovery Protocols for Vehicular Networks

Abrougui, Kaouther 28 September 2011
Intelligent Transportation Systems (ITS) are gaining momentum among researchers. ITS encompasses several technologies, including wireless communications, sensor networks, data and voice communication, and real-time driving assistance systems. These state-of-the-art technologies are expected to pave the way for a plethora of vehicular network applications; indeed, we have recently witnessed growing interest in vehicular networks from both the research community and industry. Several potential applications of vehicular networks are envisioned, such as road safety and security, traffic monitoring, and driving comfort, to mention just a few. It is critical that the existence of convenience or driving-comfort services does not negatively affect the performance of safety services. In essence, the dissemination of safety services or the discovery of convenience applications requires communication among service providers and service requesters over constrained bandwidth resources. Therefore, service discovery techniques for vehicular networks must use the available common resources efficiently. In this thesis, we focus on the design of bandwidth-efficient and scalable service discovery protocols for vehicular networks. Three types of service discovery architectures are introduced: infrastructure-less, infrastructure-based, and hybrid. Our proposed algorithms are network-layer based: service discovery messages are integrated into routing messages for lightweight discovery. Moreover, our protocols use channel diversity for efficient service discovery. We describe our algorithms, discuss their implementation, and present the main results of the extensive set of simulation experiments used to evaluate their performance.
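The network-layer integration described above can be pictured as folding service-discovery fields into the periodic routing beacon instead of sending separate discovery packets. The following Python sketch illustrates the idea; the beacon format, field names, and the three-query cap are illustrative assumptions, not the thesis's protocol.

```python
# Sketch: service discovery piggybacked on routing beacons.
# Beacon format, field names, and the query cap are assumptions.
import json
import time

def make_beacon(node_id, position, pending_queries):
    """Build a periodic routing beacon, folding in any pending
    service-discovery queries so no extra packet is needed."""
    beacon = {"type": "routing_beacon",
              "node": node_id,
              "pos": position,
              "ts": time.time()}
    if pending_queries:
        beacon["svc_query"] = pending_queries[:3]  # cap to bound beacon size
    return json.dumps(beacon).encode()

def handle_beacon(raw, local_services):
    """On reception, answer any piggybacked queries that match a locally
    offered service; replies would ride on the next outgoing beacon."""
    beacon = json.loads(raw)
    return [q for q in beacon.get("svc_query", []) if q in local_services]
```

Because a query rides on a beacon that would be transmitted anyway, discovery consumes almost no additional bandwidth, which is the lightweight property the abstract claims.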
152

Overlay Neighborhoods for Distributed Publish/Subscribe Systems

Sherafat Kazemzadeh, Reza 07 January 2013
The publish/subscribe (pub/sub) model has been widely applied in application scenarios that demand loose coupling and asynchronous communication between large numbers of information sources and sinks. In this model, clients are granted the flexibility to specify their interests at a high level and rely on the pub/sub middleware for delivery of the publications that match those interests. This increased flexibility and ease of use on the client side results in substantial complexity on the part of the pub/sub middleware implementation. Furthermore, for several reasons, including improved scalability and availability and the avoidance of a single point of failure, the pub/sub middleware is commonly composed of a set of collaborating message routers, a.k.a. brokers. The distributed nature of this design introduces further challenges in ensuring end-to-end reliability as well as efficiency of operation. These challenges are largely unique to the pub/sub model and hence absent from both point-to-point and multicast protocols. This thesis develops solutions that ensure the dependable operation of the pub/sub system by exploiting the notion of overlay neighborhoods in a formal manner. More specifically, brokers maintain information about their neighbors within a configurable distance in the pub/sub overlay and exploit this knowledge to construct alternative forwarding paths or make smart forwarding decisions that improve efficiency, bandwidth utilization, and delivery delay, all at the same time. Furthermore, in the face of failures, overlay neighborhoods enable fast reconstruction of forwarding paths in the system without compromising its reliability and availability. Finally, as an added benefit of overlay neighborhoods, this thesis develops large-scale algorithms that bring the advantages of the pub/sub model to the domain of file sharing and bulk content dissemination. Experimental evaluation results with deployments as large as 1000 nodes illustrate that the pub/sub system scales well and outperforms the traditional BitTorrent protocol in terms of content dissemination delay.
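The core bookkeeping behind overlay neighborhoods can be sketched as a bounded breadth-first expansion over the broker topology. A minimal Python sketch follows; the function signature and data layout are illustrative assumptions, not the thesis's API.

```python
# Sketch: each broker learns all brokers within a configurable
# distance d of itself in the overlay. Names are assumptions.
from collections import deque

def neighborhood(broker, links, d):
    """Return {broker_id: hop_distance} for all brokers within d hops.

    links maps each broker id to the set of its direct overlay neighbors.
    """
    dist = {broker: 0}
    queue = deque([broker])
    while queue:
        b = queue.popleft()
        if dist[b] == d:          # do not expand past the configured radius
            continue
        for nb in links.get(b, ()):
            if nb not in dist:
                dist[nb] = dist[b] + 1
                queue.append(nb)
    dist.pop(broker)              # keep only the neighbors themselves
    return dist
```

With this map in hand, a broker adjacent to a failed peer already knows every broker within d hops and can splice replacement forwarding paths locally, which is the intuition behind the fast path reconstruction mentioned above.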
154

Fault-tolerant Cache Coherence Protocols for CMPs

Fernández Pascual, Ricardo 23 July 2007
We propose a way to deal with transient faults in the interconnection network of many-core CMPs that differs from the classic approach of building a fault-tolerant interconnection network. In particular, we provide fault tolerance mechanisms at the level of the cache coherence protocol, so that it guarantees the correct execution of programs even when the underlying interconnection network does not deliver all messages correctly. This way, we can take advantage of the different meaning of each message to achieve fault tolerance with lower overhead than at the level of the interconnection network, which has to treat all messages alike with respect to reliability. We design several fault-tolerant cache coherence protocols using these techniques and evaluate them. This evaluation shows that, in the absence of faults, our techniques do not significantly increase the execution time of applications; their major cost is an increase in network traffic due to the acknowledgment messages that ensure the reliable transfer of ownership between coherence nodes, which are sent out of the critical path of cache misses. In addition, a system using our protocols degrades gracefully when transient faults actually happen and can support fault rates much higher than those expected in the real world with only a small performance degradation.
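The ownership-transfer acknowledgment to which the evaluation attributes most of the traffic overhead can be pictured with a small state machine: the old owner keeps a recoverable copy until the new owner acknowledges receipt. The sketch below is a toy model; the state names, timeout value, and network interface are illustrative assumptions, not the protocol definitions from the thesis.

```python
# Toy sketch of acknowledged ownership transfer between coherence nodes.
# State names, timeout, and the network interface are assumptions.
import enum
import time

class State(enum.Enum):
    OWNER = 1    # this node owns the line
    BACKUP = 2   # ownership sent; keep the data until the ack arrives
    INVALID = 3  # ack received; the backup copy may be dropped

class CacheLine:
    def __init__(self):
        self.state = State.OWNER
        self.sent_at = None

    def send_ownership(self, network, addr, dest):
        network.send(dest, ("OWN_XFER", addr))
        self.state = State.BACKUP        # data stays recoverable
        self.sent_at = time.monotonic()

    def on_ack(self):
        self.state = State.INVALID       # transfer is now known to be safe

    def check_timeout(self, network, addr, dest, timeout=1e-3):
        # Off the critical path: resend only if the ack never arrived,
        # so a lost message costs traffic, not correctness.
        if self.state is State.BACKUP and time.monotonic() - self.sent_at > timeout:
            self.send_ownership(network, addr, dest)
```

Because the resend logic runs off the critical path of cache misses, a lost message costs extra traffic and latency for that one transfer rather than program correctness.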
155

On Magic State Distillation using Nuclear Magnetic Resonance

Hubbard, Adam A. January 2008
Physical implementations of quantum computers will inevitably be subject to errors. However, provided that the error rate is below some threshold, it is theoretically possible to build fault-tolerant quantum computers that are arbitrarily reliable. A particularly attractive fault-tolerant proposal, owing to its high threshold value, relies on Clifford-group quantum computation and access to ancilla qubits. These ancilla qubits must be prepared in a particular state termed the 'magic' state. It is possible to distill faulty magic states into pure magic states, which is of significant interest for experimental work, where perfect state preparation is generally not possible. This thesis describes a scheme for distilling magic states based on liquid-state nuclear magnetic resonance (NMR). Simulations are presented that indicate that such a distillation is feasible if a high level of experimental control is achieved. Preliminary experimental results are reported that outline the challenges that must be overcome to attain such precise control.
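For context, the 'magic' states and the distillation guarantee are commonly written as follows (the standard Bravyi-Kitaev formulation, given for orientation only; the thesis's NMR scheme may target a different code or state):

```latex
% Standard magic states (Bravyi--Kitaev 2005), given for context.
\[
  |H\rangle = \cos\tfrac{\pi}{8}\,|0\rangle + \sin\tfrac{\pi}{8}\,|1\rangle,
  \qquad
  |T\rangle = \cos\beta\,|0\rangle + e^{i\pi/4}\sin\beta\,|1\rangle,
  \quad \cos 2\beta = \tfrac{1}{\sqrt{3}}.
\]
% Distillation consumes many noisy copies (error rate \varepsilon) and
% outputs fewer, cleaner ones; e.g. the well-known 15-to-1 scheme gives
% \varepsilon' \approx 35\,\varepsilon^{3}, so iterating suppresses the
% error whenever \varepsilon starts below the scheme's threshold.
```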
156

On Fault-based Attacks and Countermeasures for Elliptic Curve Cryptosystems

Dominguez Oviedo, Agustin January 2008
For some applications, elliptic curve cryptography (ECC) is an attractive choice because it achieves the same level of security with a much smaller key size than schemes based on integer factorization or the discrete logarithm. Unfortunately, cryptosystems, including those based on elliptic curves, have been subject to attacks; fault-based attacks, for example, have been shown to be a real threat to today's cryptographic implementations. In this thesis, we consider fault-based attacks and countermeasures for ECC. We propose a new fault-based attack against the Montgomery ladder elliptic curve scalar multiplication (ECSM) algorithm. For security reasons, and especially to provide resistance against fault-based attacks, it is very important to verify the correctness of computations in ECC applications. We address protection against fault attacks on ECSM at two levels: module and algorithm. For protection at the module level, where the underlying scalar multiplication algorithm is not changed, a number of schemes and hardware structures based on recomputation or parallel computation are presented; these structures can detect errors with very high probability during the computation of ECSM. For protection at the algorithm level, we use the concepts of point verification (PV) and coherency check (CC), and we investigate the error detection coverage of PV and CC for the Montgomery ladder ECSM algorithm. Additionally, we propose two algorithms based on the double-and-add-always method that are resistant to the safe-error (SE) attack, and we demonstrate that one of them also resists the sign-change fault (SCF) attack.
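To make PV and CC concrete, the following toy Python sketch wires both checks into a Montgomery ladder over a tiny textbook curve (y^2 = x^3 + 2x + 2 over GF(17)); the curve choice, the per-iteration placement of the coherency check, and all names are illustrative assumptions rather than the algorithms evaluated in the thesis.

```python
# Toy Montgomery ladder with point verification (PV) and a coherency
# check (CC). Tiny textbook curve; requires Python 3.8+ for pow(x,-1,p).
p, a, b = 17, 2, 2
O = None  # point at infinity

def on_curve(P):                     # point verification (PV)
    if P is O:
        return True
    x, y = P
    return (y * y - (x ** 3 + a * x + b)) % p == 0

def neg(P):
    return O if P is O else (P[0], (-P[1]) % p)

def add(P, Q):                       # affine short-Weierstrass addition
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                     # P == -Q
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ladder(k, P):
    R0, R1 = O, P
    for bit in bin(k)[2:]:           # most significant bit first
        if bit == '0':
            R1 = add(R0, R1); R0 = add(R0, R0)
        else:
            R0 = add(R0, R1); R1 = add(R1, R1)
        if add(R1, neg(R0)) != P:    # CC: ladder invariant R1 - R0 == P
            raise RuntimeError("fault detected by coherency check")
    if not on_curve(R0):             # final PV before releasing the result
        raise RuntimeError("fault detected by point verification")
    return R0

print(ladder(13, (5, 1)))            # 13*P on the toy curve
```

A fault injected into either ladder register generally breaks the R1 - R0 = P invariant or pushes the result off the curve, so it is caught before the scalar multiple is released; checking the invariant on every iteration is expensive, and how often to apply each check is part of the design space.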
159

On Error Detection and Recovery in Elliptic Curve Cryptosystems

Alkhoraidly, Abdulaziz Mohammad January 2011
Fault analysis attacks represent a serious threat to a wide range of cryptosystems, including those based on elliptic curves. Given the variety and demonstrated practicality of these attacks, it is essential for cryptographic implementations to handle different types of errors properly and securely. In this work, we address several aspects of error detection and recovery in elliptic curve cryptosystems. In particular, we discuss the problem of the wasteful computation performed between the occurrence of an error and its detection, and propose solutions based on frequent validation to reduce that waste. We begin by presenting ways to select the validation frequency so as to minimize various performance criteria, including the average and worst-case costs and the reliability threshold. We also provide solutions that reduce the sensitivity of the validation frequency to variations in the statistical error model and its parameters. Then, we present and discuss adaptive error recovery and illustrate its advantages in terms of low sensitivity to the error model and reduced variance of the resulting overhead, especially in the presence of burst errors. Moreover, we use statistical inference to evaluate and fine-tune the selection of the adaptive policy. We also address the cost of validation testing and present a collection of coherency-based, cost-effective tests. We evaluate variations of these tests in terms of cost and error detection effectiveness, and provide infective and reduced-cost, repeated-validation variants. Finally, we use coherency-based tests to construct a combined-curve countermeasure that avoids the weaknesses of earlier related proposals and provides a flexible trade-off between cost and effectiveness.
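The shape of the frequency-selection problem can be seen in a first-order model (an illustration under simplifying assumptions, not the thesis's formulation): a computation of n steps is validated every k steps, each validation costs c_v, and each step fails independently with probability q.

```latex
\[
  T(k) \;\approx\; \underbrace{\frac{n}{k}\,c_v}_{\text{validation}}
      \;+\; \underbrace{n\,q\,\frac{k}{2}}_{\text{expected recomputation}},
  \qquad
  \frac{dT}{dk} = 0 \;\Rightarrow\; k^{*} = \sqrt{\frac{2\,c_v}{q}} .
\]
```

Each k-step block fails with probability about kq and, since the error lands roughly uniformly within the block, wastes about k/2 steps on average; validating more often therefore trades extra tests for less wasted work. The thesis refines this trade-off against richer criteria, including worst-case cost and burst errors.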
160

Technology Impacts of CMOS Scaling on Microprocessor Core Design for Hard-Fault Tolerance in Single-Core Applications and Optimized Throughput in Throughput-Oriented Chip Multiprocessors

Bower, Fred January 2010
The continued march of technological progress, epitomized by Moore's Law, provides the microarchitect with increasing numbers of transistors to employ as we continue to shrink feature geometries. Physical limitations impose new constraints upon designers in the areas of overall power and localized power density. Techniques that scale threshold and supply voltages to lower values in order to reduce power consumption have also run into physical limitations, exacerbating power and cooling problems in deep sub-micron CMOS process generations. Smaller device geometries are also subject to increased sensitivity to common failure modes as well as manufacturing process variability.

In the face of these added challenges, we observe a shift in the focus of the industry away from building ever-larger single-core chips, which aim to reduce single-threaded latency, toward a design approach that employs multiple cores on a single chip to improve throughput. While the early multicore era utilized the existing single-core designs of the previous generation in small numbers, subsequent generations have introduced cores tailored to multicore use. These cores seek to achieve power-efficient throughput and have led to a new emphasis on throughput-oriented computing, particularly for Internet workloads, where the end-to-end computational task is dominated by long-latency network operations. The ubiquity of these workloads makes a compelling argument for throughput-oriented designs, but does not fully free the microarchitect from the latency demands of common workloads in the enterprise and desktop application spaces.

We believe that a continued need for both throughput-oriented and latency-sensitive processors will exist in coming generations of technology. We further opine that making effective use of the additional transistors that will be available may require different techniques for latency-sensitive designs than for throughput-oriented ones, since we may trade latency or throughput for the desired attribute of a core in each of the respective paradigms.

We make three major contributions with this thesis. Our first contribution is a fine-grained fault diagnosis and deconfiguration technique for array structures, such as the ROB, within the microprocessor core. We present and evaluate two variants of this technique. The first variant uses an existing fault detection and correction technique, whose scope is the processor core execution pipeline, to ensure correct processor operation. The second variant integrates fault detection and correction into the array structure itself to provide a self-contained, fine-grained fault detection, diagnosis, and repair technique.

In our second contribution, we develop a lightweight, fine-grained fault diagnosis mechanism for the processor core. In this work, we leverage the first contribution's methods to provide deconfiguration of faulty array elements. We additionally extend the scope of that work to include all pipeline circuitry from instruction issue to retirement.

In our third and final contribution, we focus on throughput-oriented core data cache design. In this work, we study the demands of a throughput-oriented core running a representative workload and then propose and evaluate an alternative data cache implementation that more closely matches the demands of the core. We then show that a better-matched cache design can be exploited to provide improved throughput under a fixed power budget.

Our results show that typical latency-sensitive cores have sufficient redundancy to make fine-grained hard-fault tolerance an affordable alternative for hardening complex designs. Our designs suffer little or no performance loss when no faults are present and retain nearly the same performance characteristics in the presence of small numbers of hard faults in protected structures. In our study of the latency-sensitive core, we have shown that SRAM-based designs have low latencies that end up providing less benefit to a throughput-oriented core and workload than a better-fitted data cache composed of DRAM. The move from a high-power, fast technology to a lower-power, slower technology allows us to increase L1 data cache capacity, which is a net benefit for the throughput-oriented core.
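The deconfiguration idea from the first contribution can be sketched in a few lines: once diagnosis pins a hard fault on an array entry, allocation simply skips that entry, so the structure keeps working at slightly reduced capacity. The sketch below is an illustration; the class and method names are assumptions, not the thesis's design.

```python
# Illustrative sketch of fine-grained deconfiguration for a circular
# array structure such as a reorder buffer (ROB): entries diagnosed as
# faulty are marked and skipped by allocation. Names are assumptions.

class DeconfigurableArray:
    def __init__(self, size):
        self.size = size
        self.faulty = [False] * size   # set by the diagnosis mechanism
        self.head = 0                  # next candidate entry

    def deconfigure(self, idx):
        """Permanently retire entry idx after diagnosis pins a hard fault."""
        self.faulty[idx] = True

    def allocate(self):
        """Return the next usable entry index, skipping retired entries."""
        for _ in range(self.size):
            idx = self.head
            self.head = (self.head + 1) % self.size
            if not self.faulty[idx]:
                return idx
        raise RuntimeError("all entries deconfigured")

# Example: retire entry 2, then allocations skip it transparently.
rob = DeconfigurableArray(4)
rob.deconfigure(2)
print([rob.allocate() for _ in range(4)])   # [0, 1, 3, 0]
```

Capacity shrinks by one entry per diagnosed fault while everything else proceeds unchanged, matching the graceful degradation the results describe.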
