11

Modeling of Simultaneous Switching Noise in On-Chip and Package Power Distribution Networks Using Conformal Mapping, Finite Difference Time Domain and Cavity Resonator Methods

Mao, Jifeng 29 October 2004 (has links)
This thesis focuses on the modeling and simulation of simultaneous switching noise (SSN) in packages and integrated circuits, with the main emphasis on the latter. Efficient and accurate methods have been developed for modeling the SSN coupling in the multi-layered planes of electronic packages, for extracting the on-chip power grid of integrated circuits, and for simulating power supply noise in large power distribution networks. These methods include conformal mapping, the finite difference time domain (FDTD) method, and the cavity resonator method, with which the electrical performance of the power distribution system of a high-speed electronic product can be predicted. The model developed for field penetration captures the effect of the magnetic field penetrating through planes in multi-layered packages. An analytical model for the extraction of the interconnect parasitics of a regular on-chip power grid has been presented. The complex image technique has been applied to model dispersive interconnects on a lossy silicon substrate. A Debye rational approximation has been used to fit the RLGC parameters so that the frequency-dependent elements can be simulated in the time domain. The simulation of the full-chip power grid network has been carried out using modified FDTD expressions. Several aspects of characterizing a generic on-chip power distribution network have been presented. The crossover capacitance has been evaluated using an analytical model derived from conformal mapping. An analytical model has been proposed to extract the parameters of on-chip multi-conductor transmission lines; it guarantees stability and is applicable to general arrangements of multi-conductor transmission lines. These modeling procedures have been incorporated into a computer program that automatically generates the power grid model from the layout of the chip power distribution network. Research on 3-D on-chip power distribution networks has also been presented. The complex image technique has been extended from microstrip-type to stripline-type interconnects. Macromodel images with closed-form expressions have been derived to capture the loss mechanism of multiple conductive substrates. The effect of 3-D integration on switching noise has been illustrated in the time domain using examples.
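The time-domain engine behind this kind of power-grid noise simulation is a finite-difference update over a lossy RLGC network. Below is a minimal illustrative sketch, not the thesis code, of a 1-D FDTD leapfrog update for a single lossy transmission-line segment; the RLGC values, grid sizes, and the Gaussian excitation are assumptions chosen only to make the example run.

```python
import numpy as np

R, L, G, C = 5.0, 2.5e-7, 1e-3, 1e-10    # per-unit-length RLGC values (assumed)
nx, nt = 200, 2000                        # spatial cells, time steps
dx = 1e-3                                 # cell length [m]
dt = 0.9 * dx * np.sqrt(L * C)            # CFL-limited time step

V = np.zeros(nx + 1)                      # node voltages
I = np.zeros(nx)                          # branch currents on a staggered grid

# Semi-implicit loss coefficients of the leapfrog update
ci1 = (L / dt - R / 2) / (L / dt + R / 2)
ci2 = 1.0 / (dx * (L / dt + R / 2))
cv1 = (C / dt - G / 2) / (C / dt + G / 2)
cv2 = 1.0 / (dx * (C / dt + G / 2))

peak = 0.0
for n in range(nt):
    V[0] = np.exp(-((n * dt - 1e-9) / 2e-10) ** 2)     # Gaussian noise injection
    I = ci1 * I - ci2 * (V[1:] - V[:-1])               # current half-step
    V[1:-1] = cv1 * V[1:-1] - cv2 * (I[1:] - I[:-1])   # voltage full step
    peak = max(peak, abs(V[-2]))

print(f"peak noise voltage near the far end: {peak:.3f} V")
```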
12

Forward Error Correction for Packet Switched Networks

Valverde Martínez, David, Parada Otte, Francisco Javier January 2008 (has links)
The main goal of this thesis is to select and test Forward Error Correction (FEC) schemes suitable for network video transmission over RTP/UDP. A general concern in communication networks is achieving a trade-off between reliable transmission and the delay it introduces. Our purpose is to look for techniques that improve reliability while the real-time delay constraints are fulfilled. To achieve this, the FEC techniques focus on recovering the packet losses that occur during transmission. The FEC schemes we have selected are a Parity Check algorithm, Reed-Solomon (RS) codes, and a Convolutional code. Simulations are performed to test the different schemes.

The results obtained show that the RS codes are the most powerful schemes in terms of recovery capability. However, they cannot be deployed for every configuration, since they exceed the delay threshold. On the other hand, although the Parity Check codes are the least efficient in terms of error recovery, they show reasonably low delay. Therefore, depending on the packet loss probability we are working with, we may choose one scheme or the other. To summarize, this thesis includes a theoretical background, a thorough analysis of the chosen FEC schemes, simulation results, conclusions, and proposed future work.
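As a concrete illustration of the simplest of the three schemes named above, the sketch below recovers a single lost packet from an XOR parity packet computed over a block of k media packets. The packet contents and block size are made up for the example; the thesis's actual schemes are not reproduced here.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(block):
    """Build one parity packet over a block of equal-length packets."""
    return reduce(xor_bytes, block)

def recover(block_with_loss, parity):
    """Rebuild the single missing packet (marked as None) from the parity."""
    present = [p for p in block_with_loss if p is not None]
    missing = reduce(xor_bytes, present + [parity])
    return [p if p is not None else missing for p in block_with_loss]

packets = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]      # k = 4 source packets (assumed)
parity = make_parity(packets)
received = [b"pkt0", None, b"pkt2", b"pkt3"]        # packet 1 lost in transit
assert recover(received, parity) == packets         # the lost packet is rebuilt
```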
13

Lossy Transmission Line Modeling and Simulation Using Special Functions

Zhong, Bing January 2006 (has links)
A new algorithm for modeling and simulation of lossy interconnect structures modeled by transmission lines with Frequency Independent Line Parameters (FILP) or Frequency Dependent Line Parameters (FDLP) is developed in this research. Since frequency-dependent RLGC parameters must be employed to correctly model skin effects and dielectric losses in high-performance interconnects, we first study the behavior of various lossy interconnects characterized by FILP and FDLP. Current general macromodeling methods and Model Order Reduction (MOR) algorithms are discussed. Next, some canonical integrals associated with the transient responses of lossy transmission lines with FILP are presented. Using contour integration techniques, these integrals can be represented as closed-form expressions involving special functions, namely Incomplete Lipschitz-Hankel Integrals (ILHIs) and Complementary Incomplete Lipschitz-Hankel Integrals (CILHIs). Various input signals, such as ramp signals and exponentially decaying sine signals, are used to test the expressions involving ILHIs and CILHIs. Excellent agreement is observed between these closed-form expressions and the results from commercial simulation tools. We then develop a frequency-domain Dispersive Hybrid Phase-Pole Macromodel (DHPPM) for lossy transmission lines with FDLP, which consists of a constant RLGC propagation function multiplied by a residue series. The basic idea is to first extract the dominant physical phenomenology using a propagation function in the frequency domain that is modeled by FILP. A rational function approximation then accounts for the remaining effects of the FDLP line. By using a partial fraction expansion and analytically evaluating the required inverse Fourier transform integrals, the time-domain DHPPM can be decomposed into a sum of canonical transient responses of lines with FILP for various excitations (e.g., trapezoidal and unit-step). These canonical transient responses are then expressed analytically as closed-form expressions involving ILHIs, CILHIs, and Bessel functions. The DHPPM simulator can compute transient results for various input waveforms on both single and coupled interconnect structures. Comparisons between the DHPPM results and those produced by commercial simulation tools such as HSPICE, as well as a numerical Inverse Fast Fourier Transform (IFFT), show that the DHPPM results are very accurate.
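The convenience of the pole-residue (partial-fraction) form referred to above is that each term inverts to the time domain analytically, so no numerical inverse transform is needed for that part of the model. The short sketch below illustrates this for a made-up set of poles and residues; it is not the DHPPM itself and does not involve the ILHI/CILHI special functions.

```python
import numpy as np

# Assumed pole/residue pairs (one conjugate pair plus one real pole), purely illustrative
poles    = np.array([-2e9 + 5e9j, -2e9 - 5e9j, -8e8])
residues = np.array([ 1e9 - 2e9j,  1e9 + 2e9j,  3e8])

t = np.linspace(0.0, 5e-9, 1001)
dt = t[1] - t[0]

# Each term r/(s - p) has the closed-form inverse transform r * exp(p * t) for t >= 0,
# so the impulse response is just a finite sum of complex exponentials.
h = np.real(sum(r * np.exp(p * t) for r, p in zip(residues, poles)))

# Step response obtained by numerically integrating the closed-form impulse response
step = np.cumsum(h) * dt
print(f"h(0) = {h[0]:.3e},  step response at t = 5 ns: {step[-1]:.3e}")
```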
14

Low-complexity methods for image and video watermarking

Coria Mendoza, Lino Evgueni 05 1900 (has links)
For digital media, the risk of piracy is aggravated by the ease of copying and distributing content. Watermarking has become the technology of choice for discouraging people from creating illegal copies of digital content. Watermarking is the practice of imperceptibly altering media content by embedding a message, which can be used to identify the owner of that content. A watermark message can also be a set of instructions for the display equipment, providing information about the content’s usage restrictions. Several applications are considered and three watermarking solutions are provided. First, applications such as owner identification, proof of ownership, and digital fingerprinting are considered, and a fast content-dependent image watermarking method is proposed. The scheme offers a high degree of robustness against distortions, mainly additive noise, scaling, low-pass filtering, and lossy compression. The method also requires only a small amount of computation. It generates a set of evenly distributed codewords constructed via an iterative algorithm. Every message bit is represented by one of these codewords and is then embedded in one of the image’s 8 × 8 pixel blocks. The information in that particular block is used in the embedding so as to ensure robustness and image fidelity. Two watermarking schemes designed to prevent theatre camcorder piracy are also presented. In these methods, the video is watermarked so that its display is not permitted if a compliant video player detects the watermark. A watermark that is robust to geometric distortions (rotation, scaling, cropping) and lossy compression is required in order to block access to media content that has been recorded with a camera inside a movie theatre. The proposed algorithms take advantage of the properties of the dual-tree complex wavelet transform (DT CWT). This transform offers the advantages of both the regular and the complex wavelets (perfect reconstruction, approximate shift invariance, and good directional selectivity). Our methods use these characteristics to create watermarks that are robust to geometric distortions and lossy compression. The proposed schemes are simple to implement and outperform comparable methods when tested against geometric distortions.
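To make the block-based embedding idea concrete, here is a small illustrative sketch of one message bit carried by a pseudo-random ±1 codeword added to an 8 × 8 pixel block, with non-blind (informed) correlation detection. The codeword construction, embedding strength, and detector are assumptions for the example, not the thesis algorithms.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
codeword = rng.choice([-1.0, 1.0], size=(8, 8))    # shared pseudo-random pattern
alpha = 2.0                                        # embedding strength (assumed)

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """Add the codeword with a sign chosen by the message bit."""
    sign = 1.0 if bit else -1.0
    return np.clip(block + sign * alpha * codeword, 0, 255)

def detect_bit(marked: np.ndarray, original: np.ndarray) -> int:
    """Informed detection: correlate the embedding residual with the codeword."""
    return int(np.sum((marked - original) * codeword) > 0)

block = rng.integers(0, 256, size=(8, 8)).astype(float)    # stand-in image block
marked = embed_bit(block, 1)
attacked = marked + rng.normal(0.0, 1.0, size=(8, 8))      # mild additive noise
print("recovered bit:", detect_bit(attacked, block))
```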
17

HTTP Traffic Analysis based on a Lossy Packet-level Trace

Zhao, Song 23 August 2013 (has links)
No description available.
18

On the Rate-Distortion-Perception Tradeoff for Lossy Compression

Qian, Jingjing January 2023 (has links)
Deep generative models, when utilized in lossy image compression tasks, can reconstruct realistic-looking outputs even at extremely low bit-rates, while traditional compression methods often exhibit noticeable artifacts under similar conditions. As a result, there has been a substantial surge of interest in both the information-theoretic aspects and the practical architectures of deep-learning-based image compression. This thesis makes contributions to the emerging framework of rate-distortion-perception theory. The main results are summarized as follows.

1. We investigate the tradeoff among rate, distortion, and perception for binary sources. The distortion considered here is the Hamming distortion and the perception quality is measured by the total variation distance. We first derive a closed-form expression for the rate-distortion-perception tradeoff in the one-shot setting. This is followed by a complete characterization of the achievable distortion-perception region for a general representation. We then consider the universal setting in which the encoder is one-size-fits-all, and derive upper and lower bounds on the minimum rate penalty. Finally, we study successive refinement for both point-wise and set-wise versions of perception-constrained lossy compression. A necessary and sufficient condition for point-wise successive refinement and a sufficient condition for the successive refinability of universal representations are provided.

2. Next, we characterize the rate-distortion-perception function of vector Gaussian sources, which extends the result for the scalar counterpart, and show that in the high-perceptual-quality regime each component of the reconstruction (including high-frequency components) is strictly correlated with that of the source, in contrast to the traditional water-filling solution. This result is obtained by optimizing over all possible encoder-decoder pairs subject to the distortion and perception constraints. We then consider the notion of universal representation, where the encoder is fixed and the decoder is adapted to achieve different distortion-perception pairs. We characterize the achievable distortion-perception region for a fixed representation and demonstrate that the corresponding distortion-perception tradeoff is approximately optimal. Our findings significantly enrich the nascent rate-distortion-perception theory, establishing a solid foundation for the field of learned image compression. / Doctor of Philosophy (PhD)
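For readers new to this framework, the sketch below evaluates the three competing quantities for a Bernoulli source and an arbitrary binary test channel: the rate term I(X; X̂), the Hamming distortion, and the total variation distance used as the perception measure. The channel is an arbitrary illustration, not an optimizer from the thesis.

```python
import numpy as np

p = 0.3                                        # P(X = 1) for the Bernoulli source
px = np.array([1 - p, p])
Q = np.array([[0.9, 0.1],                      # Q(xhat | x = 0), assumed test channel
              [0.2, 0.8]])                     # Q(xhat | x = 1)

joint = px[:, None] * Q                        # P(X = x, Xhat = xhat)
pxhat = joint.sum(axis=0)                      # law of the reconstruction

distortion = joint[0, 1] + joint[1, 0]         # Hamming distortion: P(X != Xhat)
perception = 0.5 * np.abs(px - pxhat).sum()    # total variation distance

# Mutual information I(X; Xhat) in bits (all probabilities here are strictly positive)
mi = np.sum(joint * np.log2(joint / (px[:, None] * pxhat[None, :])))

print(f"rate (bits) = {mi:.4f}, distortion = {distortion:.4f}, "
      f"perception (TV) = {perception:.4f}")
```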
19

Dynamic Sink Deployment Strategies

Xiong, Jinfeng January 2022 (has links)
IoT sensing systems play an important role in smart cities. IoT devices are generally constrained nodes with limited power and memory, and saving energy is a key challenge for the scalability of sensing networks. Previous studies introduced the dynamic sink and three dynamic sink deployment strategies, and showed through simulation experiments that sensing networks with dynamic sinks can reduce energy consumption. Further investigation of new dynamic sink deployment strategies is needed to explore the full potential of dynamic sinks. This work investigates three new deployment strategies, namely the Deterministic Strategy, the Prediction Strategy, and the Improved Prediction Strategy. We design experiments with different scenarios and evaluate packet delivery ratio (PDR) and power consumption using emulated IoT devices on the Cooja simulator. The results show that the setups with these three new deployment strategies perform well in terms of PDR and power consumption. Furthermore, we compare the performance differences between the three strategies. The Improved Prediction Strategy has advantages over the other two and shows promise for real-world deployment.
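As a rough illustration of the flavour of prediction-based sink placement (not the thesis's Prediction Strategy, whose details are not given here), the sketch below moves a mobile sink to the centroid of the nodes predicted to transmit in the next interval, using a toy activity model.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
node_xy = rng.uniform(0, 100, size=(20, 2))          # sensor node positions [m], assumed

def predicted_active(t: int) -> np.ndarray:
    """Toy activity model: nodes report from alternating halves of the field."""
    return node_xy[:, 0] < 50 if t % 2 == 0 else node_xy[:, 0] >= 50

def place_sink(t: int) -> np.ndarray:
    """Place the mobile sink at the centroid of the predicted senders."""
    return node_xy[predicted_active(t)].mean(axis=0)

for t in range(4):
    sink = place_sink(t)
    dists = np.linalg.norm(node_xy[predicted_active(t)] - sink, axis=1)
    print(f"interval {t}: sink at ({sink[0]:5.1f}, {sink[1]:5.1f}), "
          f"mean distance to senders {dists.mean():5.1f} m")
```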
20

A REAL-TIME HIGH PERFORMANCE DATA COMPRESSION TECHNIQUE FOR SPACE APPLICATIONS

Yeh, Pen-Shu, Miller, Warner H. 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / A high performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on block-transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desirable compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2000.
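The "embedded bit string" property mentioned above comes from sending transform coefficients bit-plane by bit-plane, most significant plane first, so the stream can be cut at any rate budget and still decoded. The toy sketch below shows this for a hand-picked array of integer coefficients; it omits the block transform and the entropy coding of the real scheme.

```python
import numpy as np

coeffs = np.array([37, -5, 12, 0, -22, 7, 3, -1])   # assumed quantized coefficients
signs = (coeffs < 0).astype(int)
mags = np.abs(coeffs)
nplanes = int(mags.max()).bit_length()

# Encode: sign bits first, then bit-planes from most to least significant
stream = list(signs)
for plane in range(nplanes - 1, -1, -1):
    stream.extend((mags >> plane) & 1)

def decode(bits, n_coeffs, n_planes_sent):
    """Decode a possibly truncated stream (uses the globally known plane count)."""
    sgn = np.array(bits[:n_coeffs])
    mag = np.zeros(n_coeffs, dtype=int)
    pos = n_coeffs
    for plane in range(nplanes - 1, nplanes - 1 - n_planes_sent, -1):
        mag = mag | (np.array(bits[pos:pos + n_coeffs]) << plane)
        pos += n_coeffs
    return np.where(sgn == 1, -mag, mag)

# Truncate the embedded stream after the sign bits plus the top 3 bit-planes
budget = len(coeffs) * (1 + 3)
partial = decode(stream[:budget], len(coeffs), 3)
print("original:   ", coeffs.tolist())
print("3-plane rec:", partial.tolist())
```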
