31

Low power techniques for global communication in CMOS VLSI

Stan, Mircea Raducu 01 January 1996 (has links)
Technology trends, and especially portable applications, are adding a third dimension (power) to the previously two-dimensional (speed, area) VLSI design space. A large portion of the power dissipation in high-performance CMOS VLSI is due to the inherent difficulties of global communication at high rates, and we propose several approaches to address the problem. These techniques can be applied at different levels of the design process. Global communication typically involves driving large capacitive loads, which inherently requires significant power. However, by carefully choosing the data representation, or encoding, of these signals, the average and peak power dissipation can be minimized. Redundancy can be added in space (number of bus lines), time (number of cycles), and voltage (number of distinct amplitude levels). The proposed codes can be used on a class of terminated off-chip board-level buses with level signaling, or on tri-state on-chip buses with level or transition signaling. At the circuit level we propose novel line drivers using charge recovery, which can reduce power dissipation for large loads by 20-30%. Both a single-ended and a differential driver, using two-step charging and discharging of the line capacitances, are proposed. The circuits are approximately twice as large and slightly slower than equivalent inverter chains and can be used either as bus line drivers or as clock drivers.
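The space-redundant encoding idea can be illustrated with bus-invert coding, in which one extra bus line signals whether the word is sent true or complemented so that no more than half of the lines ever switch in a cycle. The Python sketch below illustrates only that style of encoding, not the complete family of space/time/voltage codes developed in the dissertation:

```python
def bus_invert_encode(prev_bus, data, width=8):
    """One cycle of bus-invert-style encoding.

    A single redundant 'invert' line (redundancy in space) is asserted
    whenever transmitting the raw word would toggle more than half of the
    bus lines; the complemented word is sent instead, capping the number
    of transitions, and hence dynamic power, per cycle.
    """
    mask = (1 << width) - 1
    transitions = bin((prev_bus ^ data) & mask).count("1")
    if transitions > width // 2:
        return (~data) & mask, 1   # send complement, invert line = 1
    return data & mask, 0          # send data as-is, invert line = 0
```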
32

Diffraction-based models for iridescent colors in computer-generated imagery

Thorman, Stephen Craig 01 January 1996 (has links)
This work presents shading models for the iridescent colors produced by diffraction gratings, suitable for computer graphics image generation. Models are presented that are based on one-, two-, and three-dimensional diffraction. Shading algorithms based on these models can be directly incorporated into standard shading routines. These models allow realistic coloration of surfaces such as CD-ROMs and opals in computer-generated imagery. Examples of such surface textures are given.
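For the one-dimensional case, the iridescent colors follow from the classical grating equation d(sin θ_out − sin θ_in) = mλ. The sketch below is a simplification rather than the dissertation's shading model; it lists which visible wavelengths are reinforced for a given viewing geometry (angles in radians, grating pitch in nanometers):

```python
import math

def grating_wavelengths(d_nm, theta_in, theta_out, max_order=3):
    """Wavelengths (nm) reinforced by a 1-D grating of pitch d_nm.

    Applies the grating equation d*(sin(theta_out) - sin(theta_in)) = m*lambda;
    the visible-range solutions determine the spectral color perceived
    from a given incident/viewing direction pair.
    """
    path_diff = d_nm * (math.sin(theta_out) - math.sin(theta_in))
    visible = []
    for m in range(1, max_order + 1):
        lam = abs(path_diff) / m
        if 380.0 <= lam <= 780.0:   # keep only visible-light wavelengths
            visible.append((m, lam))
    return visible
```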
33

Security broker for multimedia wireless local area networks

Park, Se Hyun 01 January 1999 (has links)
Multimedia Wireless Local Area Networks (MWLANs) supporting various applications require secure communication for both multimedia and data applications. With the increased popularity of network and Internet applications, the need for protecting information has emerged among all levels of users. This security issue is even more pronounced in wireless communication scenarios. To minimize the deterioration of multimedia application performance, the design of security mechanisms for WLANs must take the unique characteristics of WLANs into account. In addition, malicious attacks can occur in many more locations in a WLAN environment than in a wired network, so security support functions need to be designed accordingly. In this dissertation, we employ a Security Broker that uses one or more security mechanisms to support reliable end-to-end secure multimedia applications. The introduced Security Broker includes the following novel security mechanisms: Authentication and Privacy Protocols, a Re-Authentication Protocol, and an Inline Security Layer. The Authentication and Privacy Protocol can be applied within the PCF (Point Coordination Function) of the IEEE 802.11 standard. With the integration of polling-based PCF and the use of a session code, we achieve an effective authentication and privacy mechanism. The goal of the Re-Authentication Protocol is to provide continuous security support during a specific session. The proposed Re-Authentication Protocol, which includes a key exchange procedure, has low computational complexity. Our new Inline Security Layer is proposed and implemented to support bulk encryption for MWLANs. Instead of relying on dedicated cryptography hardware, the proposed security layer takes the WLAN characteristics into account. The proposed and implemented layer has the following advantages: (1) it is less expensive to use and implement than dedicated hardware, and (2) it is more flexible for upgrades. We have implemented the Security Broker and pursued extensive experimentation. We set up the test platform with two computers that communicated via WLAN adapters. In cases where CPU and memory resources are sufficient, we observed no degradation in the multimedia applications' quality of service in terms of throughput and delay. Future generations of computers will provide abundant CPU and memory resources; therefore, we foresee software encryption as a viable approach for wireless LANs that support multimedia applications and communication.
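As a rough illustration of how session-level re-authentication can be kept computationally cheap, the sketch below derives a fresh per-epoch key from a shared master key and answers a challenge with a keyed hash. The function names and message layout are hypothetical and are not the protocol specified in the dissertation:

```python
import hashlib
import hmac

def derive_session_key(master_key: bytes, session_id: bytes, epoch: int) -> bytes:
    """Derive a per-epoch session key so keys can be rotated cheaply
    during a long-lived session (hypothetical construction)."""
    msg = session_id + epoch.to_bytes(4, "big")
    return hmac.new(master_key, msg, hashlib.sha256).digest()

def answer_reauth_challenge(master_key, session_id, epoch, challenge):
    """Prove knowledge of the current epoch key by MACing a fresh
    challenge; only symmetric operations are used, so the computational
    cost per re-authentication stays low."""
    key = derive_session_key(master_key, session_id, epoch)
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()
```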
34

Re-configurable robust QoS supporting wired and wireless LANs

Phonphoem, Anan 01 January 2000 (has links)
The traffic in current networks, including wired and wireless Local Area Networks (LANs), is dominated by multimedia applications that carry voice, video, and data traffic. Video and voice traffic are delay-sensitive and require specific Quality of Service (QoS) from the network, such as guaranteed bandwidth, maximum delay, and maximum delay variance. Bandwidth management protocols that provide QoS support have to be designed with reliability and fault tolerance in mind. In this dissertation we have developed a robust and reliable QoS-supportive wireless and wired LAN architecture. The proposed methods have been implemented in a software framework which runs on a tightly controllable PC-Windows-based experimental testbed that we developed. The testbed can easily be re-configured and supports wired, wireless, and hybrid LAN setups. It can be used to debug and test QoS management aspects such as new media access control techniques, reliability support modules, admission control, and so on. The developed testbed and software framework have the following unique features: network card independence, application independence, compliance with the TCP/IP protocol stack, and the ability to easily control and plug in new QoS modules. We have developed a SuperPoll module, which provides a performance-enhanced polling system with QoS support in noisy environments; a Shadow module, which provides a reliable and robust polling-based system in the presence of arbiter faults; and an In-service monitoring module, which monitors the QoS in LANs. The Shadow and In-service monitoring modules were implemented in the testbed. We believe that the results and methodology presented in this dissertation will serve as guidelines for future designers of robust, reliable, QoS-supportive wired and wireless local area networks.
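As a generic example of the kind of admission control such a QoS framework performs (not the specific algorithm used in the dissertation's modules), a new multimedia flow can be accepted only while the sum of reserved rates stays under a utilization cap:

```python
def admit_flow(existing_flows, new_flow, link_capacity_bps, utilization_cap=0.9):
    """Simple bandwidth-based admission control: accept the new multimedia
    flow only if the total reserved rate stays under a utilization cap,
    leaving headroom for best-effort data traffic.  Parameter names and
    the 90% cap are illustrative assumptions."""
    reserved = sum(f["rate_bps"] for f in existing_flows) + new_flow["rate_bps"]
    return reserved <= utilization_cap * link_capacity_bps
```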
35

BDD-based logic synthesis system

Yang, Congguang 01 January 2000 (has links)
Binary decision diagrams (BDDs) are the most efficient Boolean logic representation found so far. In this dissertation, a new BDD-based logic synthesis system is presented. The system is based on a new BDD decomposition theory which supports both algebraic and Boolean factorization. New techniques, which are crucial to the manipulation of BDDs in a partitioned Boolean network environment, are described in detail. The experimental results show that our logic synthesis system is capable of handling very large circuits. It offers a superior run-time advantage over state-of-the-art logic synthesis systems, with comparable results in terms of circuit area and often improved delay.
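The core of any BDD package is a unique table together with the two reduction rules (eliminate redundant tests, share isomorphic subgraphs). The minimal Python sketch below builds a reduced ordered BDD by Shannon expansion; it is meant only to illustrate the data structure, not the decomposition theory or the partitioned-network techniques of the dissertation:

```python
class BDD:
    """Minimal reduced ordered BDD: terminals 0 and 1 plus a unique table."""

    def __init__(self, num_vars):
        self.num_vars = num_vars
        self.unique = {}       # (var, low, high) -> node id
        self.node_info = {}    # node id -> (var, low, high)
        self.next_id = 2       # ids 0 and 1 are the terminal nodes

    def mk(self, var, low, high):
        if low == high:                      # rule 1: drop redundant test
            return low
        key = (var, low, high)
        if key not in self.unique:           # rule 2: share isomorphic nodes
            self.unique[key] = self.next_id
            self.node_info[self.next_id] = key
            self.next_id += 1
        return self.unique[key]

    def build(self, f, var=0, assignment=()):
        """Build the BDD of f over num_vars inputs by Shannon expansion
        in the fixed order x0 < x1 < ... (exponential, illustration only)."""
        if var == self.num_vars:
            return 1 if f(*assignment) else 0
        low = self.build(f, var + 1, assignment + (0,))
        high = self.build(f, var + 1, assignment + (1,))
        return self.mk(var, low, high)

# Example: root = BDD(3).build(lambda x, y, z: (x and y) or z)
```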
36

Imaging and video compression using embedded zerotree coding

Yin, Che-Yi 01 January 2000 (has links)
In this dissertation, we investigate several embedded zerotree wavelet (EZW) coding techniques for designing image and video coders. Four topics are addressed: (1) EZW coding using non-uniform quantization, (2) adaptive EZW coding using a rate-distortion criterion, (3) a modified “set partitioning in hierarchical trees” (SPIHT) algorithm, and (4) video coding using segmentation, regional wavelet packets, and adaptive EZW. The first three topics are applications to image compression, and the last is an application to video compression. The embedded zerotree wavelet image compression algorithm developed by Shapiro is the most popular wavelet-based image coder to date. First, we modify the quantization characteristics of EZW through two approaches: (1) non-uniform quantizers: we design two non-uniform quantization schemes, non-uniform EZW and Lloyd-Max EZW; (2) a rate-distortion criterion: we develop adaptive EZW, where we introduce adaptive step sizes for each subband. The best set of step sizes is found by using Lagrangian optimization, where two coding environments, independent and dependent, are considered. The proposed image coder retains all the good features of EZW, namely embedded coding, progressive transmission, and coding in order of importance. Experimental results show that the proposed image coders perform significantly better than the standard EZW algorithm. Next, we target the significance map coding of EZW by designing a new set partitioning algorithm. We adopt and modify the framework of SPIHT. The new set partitioning algorithm can catch more insignificant coefficients than the original algorithm. The experimental results show that the proposed algorithms achieve significant improvement over the standard EZW and SPIHT algorithms. Finally, we present a new video coding algorithm. This algorithm can effectively exploit the temporal, spatial, and frequency information within a video sequence through a combination of segmentation, regional wavelet packet, adaptive EZW, and DPCM techniques. The proposed video coder provides notable improvement in R-D characteristics over similar algorithms.
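The key idea behind zerotree coding is that a wavelet coefficient whose entire spatial-orientation subtree is insignificant at the current threshold can be coded with a single symbol. The sketch below shows that significance test in isolation, with the coefficient array and parent-child map as assumed inputs; it is not the modified SPIHT partitioning proposed in the dissertation:

```python
def is_zerotree_root(coeffs, node, threshold, children):
    """Return True if 'node' is a zerotree root: the coefficient is
    insignificant (|c| < threshold) and so is every descendant in its
    spatial-orientation tree, so EZW can code the whole subtree with a
    single symbol.  'children' maps each node to its child nodes."""
    stack = [node]
    while stack:
        n = stack.pop()
        if abs(coeffs[n]) >= threshold:
            return False
        stack.extend(children.get(n, []))
    return True
```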
37

Stochastic models for network traffic

Misra, Vishal 01 January 2000 (has links)
Traffic modeling is an integral part of teletraffic analysis for engineering telecommunication networks. In this dissertation, we develop a hierarchical model for teletraffic. The model is motivated by the physical nature of traffic generation. We present an analysis of the model from a signal-theoretic point of view, explaining some of the recent observations of network traffic. We also provide a novel technique to model TCP traffic, one of the most important components of a layer of our hierarchy, and we develop analysis techniques for our model. The predictions of our model match well with experiments performed on the Internet. We extend our TCP model to describe a complete system of networks of active queue management routers carrying TCP traffic, and we develop a numerical scheme to obtain performance metrics of such networks. The numerical scheme matches well with simulations, and we are able to gain an in-depth understanding of RED, one of the more popular active queue management schemes.
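For reference, the queue-length-to-drop-probability mapping of classic RED (the textbook formulation of Floyd and Jacobson, not the fluid model developed in this dissertation) looks like this:

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED marking/drop probability as a function of the
    EWMA-averaged queue length: no drops below min_th, every packet
    dropped above max_th, and a linear ramp up to max_p in between."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```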
38

Measurement and modeling of packet loss in the Internet

Yajnik, Maya Kirit 01 January 2000 (has links)
The introduction of services and applications to the Internet has spurred the development of new protocols that provide reliability, congestion control, and flow control. Multicasting, which enables group communication, is one such promising new service that has made a new class of applications possible. In addition, multimedia applications have become increasingly important. Understanding and modeling the patterns of packet loss, as it occurs in Internet connections, are crucial to the design of these new applications and the protocols that support them. The goal of this thesis is to characterize and model measured packet loss in Internet connections to inform the design of new applications and protocols. First, we analyze the correlation of packet loss in multicast sessions. We consider the spatial correlation (between receivers in a multicast session) as well as the temporal correlation (the correlation with respect to time) as seen in measurements of packet loss. The measurements are taken on the MBone multicast network, an experimental network superimposed on the Internet. We also address the related issue of where loss occurs in the network by estimating the loss rates on different parts of the multicast distribution tree. Next, we focus on the temporal correlation of packet loss along both regular point-to-point connections and multicast connections. We estimate the correlation timescale of the measured data. We also estimate the level of model complexity required to accurately capture the observed temporal correlation and evaluate the validity of previously proposed models (the Bernoulli model and the two-state model). Finally, we examine the accuracy of probe measurements for estimating both the time-averaged congestion level of a network path and the packet loss rate seen by traffic traversing the path. Our goal is to determine the circumstances under which the network performance characteristics estimated via probe data match the actual network performance as well as the end-to-end performance seen by an application. We conclude this dissertation with a discussion of future research.
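The two-state model mentioned above (often called the Gilbert model) captures bursty, temporally correlated loss with only two transition probabilities. A small simulation sketch, with hypothetical parameter names:

```python
import random

def gilbert_loss_trace(n, p_good_to_bad, p_bad_to_good, seed=None):
    """Generate a synthetic packet-loss trace from the two-state (Gilbert)
    model: packets are received in the 'good' state and lost in the 'bad'
    state, producing the bursty, temporally correlated losses observed
    in measurements.  Returns a list where 1 = lost, 0 = received."""
    rng = random.Random(seed)
    state_bad = False
    trace = []
    for _ in range(n):
        trace.append(1 if state_bad else 0)
        if state_bad:
            if rng.random() < p_bad_to_good:
                state_bad = False
        else:
            if rng.random() < p_good_to_bad:
                state_bad = True
    return trace
```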
39

Compiler -assisted hardware-based data prefetching for next generation processors

Guo, Yao 01 January 2007 (has links)
Prefetching has emerged as one of the most successful techniques for bridging the gap between modern processors and memory systems. At the same time, as we move into the deep sub-micron era, power consumption has become one of the most important design constraints besides performance. Intensive research effort has gone into data prefetching with a focus on performance improvement; however, as far as we know, the energy aspects of prefetching have not been fully investigated. This dissertation investigates data prefetching techniques for next-generation processors targeting both energy efficiency and performance speedup. We first evaluate a number of state-of-the-art data prefetching techniques from an energy perspective and identify the main energy-consuming components due to prefetching. We then propose a set of compiler-assisted, energy-aware techniques to make hardware-based data prefetching more energy-efficient. From our evaluation of a number of data prefetching techniques, we have found that, if leakage is optimized with recently proposed circuit-level techniques, most of the energy overhead of hardware data prefetching comes from prefetch-hardware-related costs and from unnecessary L1 data cache lookups for prefetches that hit in the L1 cache. This energy overhead on the memory system can be as much as 30%. We propose a set of power-aware prefetch filtering techniques to reduce the energy overhead of hardware data prefetching. Our proposed techniques include three compiler-based filtering approaches that make the prefetch predictor more energy-efficient. We also propose a hardware-based filtering technique to further reduce the energy overhead due to unnecessary prefetching in the L1 data cache. Combined, the energy-aware filtering techniques can reduce up to 40% of the energy overhead introduced by aggressive prefetching with almost no performance degradation. We also develop a location-set-driven data prefetching technique to further reduce the energy consumption of the prefetching hardware. In this scheme, we use a power-aware prefetch engine with a novel design of an indexed hardware history table. With the help of compiler-based location-set analysis, we show that the proposed prefetching scheme reduces the energy consumed by the prefetch history table by 7-11X with very small impact on performance. Our experiments show that the proposed techniques can overcome the prefetching-related energy overhead in most applications, improving the energy-delay product by 33% on average. For many of the applications studied, our work has transformed data prefetching into not only a performance improvement mechanism but also an energy-saving technique.
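As a toy illustration of hardware prefetch filtering (the structure size and policy here are hypothetical, not the dissertation's design), a small table of recently touched cache lines can suppress prefetch requests that would merely hit in the L1 data cache and waste lookup energy:

```python
class PrefetchFilter:
    """Direct-mapped filter of recently seen cache-line addresses.

    A prefetch request is dropped when its target line was recently
    observed (or prefetched), avoiding the redundant L1 lookup that is a
    major source of prefetching energy overhead.  Sizes are illustrative.
    """

    def __init__(self, entries=64, line_bytes=32):
        self.entries = entries
        self.line_bytes = line_bytes
        self.table = [None] * entries

    def should_prefetch(self, addr):
        line = addr // self.line_bytes
        idx = line % self.entries
        if self.table[idx] == line:     # likely already in L1: filter it out
            return False
        self.table[idx] = line          # remember this line for next time
        return True
```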
40

Validation of behavioral hardware descriptions

Zhang, Qiushuang 01 January 2003 (has links)
Behavioral hardware descriptions are commonly used to represent the functionality of a microelectronic system for simulation and synthesis. The manual process of creating a behavioral description is prone to errors, so a significant effort must be made to verify the correctness of behavioral descriptions. Simulation-based validation and formal verification are two techniques used to verify the correctness of designs. We have investigated validation because formal verification techniques are frequently intractable for large designs. The first step toward a behavioral validation technique is the development of validation fault coverage metrics, which can be used to evaluate the likelihood of design error detection with a given test sequence. Design faults can be classified into a variety of classes; the hardest faults are those which exhibit incorrect behavior only in rare corner cases. We developed three fault coverage metrics to target these corner-case faults. First, the domain fault coverage metric detects faults on domain boundaries by examining the test points near the boundaries, since a small domain fault may only affect points near a boundary. Second, the dataflow fault coverage metric systematically checks the coverage of selected dataflow paths, which can detect faults associated with those paths. Third, the mis-timed event (MTE) fault coverage metric detects faults that exhibit erroneous behavior only under a critical timing sequence. These new metrics can also be adapted to the validation of hardware-software systems. Experimental results show that these metrics have great potential for detecting design errors in their respective classes.
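As an illustration of the first metric, domain boundary coverage can be computed by checking whether the test set exercises both sides of each domain boundary within a small epsilon. The sketch below is a simplified reading of the idea, not the exact definition used in the dissertation:

```python
def domain_boundary_coverage(test_values, boundaries, epsilon=1):
    """Fraction of domain boundaries exercised by the test set.

    A boundary b counts as covered only if the tests include a value just
    below b and a value at or just above b (within epsilon), since small
    domain faults only misbehave in the immediate neighborhood of the
    boundary."""
    tests = set(test_values)
    if not boundaries:
        return 1.0
    covered = 0
    for b in boundaries:
        below = any(b - epsilon <= t < b for t in tests)
        above = any(b <= t <= b + epsilon for t in tests)
        if below and above:
            covered += 1
    return covered / len(boundaries)
```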
