About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Komprese obrazu pomocí vlnkové transformace / Image Compression Using the Wavelet Transform

Kaše, David January 2015 (has links)
This thesis deals with image compression using the wavelet, contourlet, and shearlet transforms. It starts with a brief look at the image compression problem and quality measurement. Next, the basic concepts of wavelets, multiresolution analysis, and scaling functions are presented, followed by a detailed look at each transform. The coefficient-coding algorithms covered are EZW, SPIHT, and, marginally, EBCOT. The second part describes the design and implementation of the constructed library. The last part compares the results of the transforms with the JPEG 2000 format. The comparison identified the types of images for which the implemented contourlet and shearlet transforms were more effective than the wavelet transform; the JPEG 2000 format was not surpassed.
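The thresholding idea shared by transform coders such as EZW and SPIHT can be illustrated with a minimal single-level 2-D Haar transform. This is a generic sketch under stated assumptions, not the thesis's library (which also implements contourlet and shearlet transforms and full coefficient coding):

```python
import numpy as np

def haar2d(img):
    """One level of the orthonormal 2-D Haar wavelet transform."""
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)      # detail subbands
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (exact reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d = np.empty_like(a)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = (a + d) / np.sqrt(2)
    img[1::2, :] = (a - d) / np.sqrt(2)
    return img

def compress(img, keep=0.1):
    """Lossy step: keep only the largest `keep` fraction of detail
    coefficients, then reconstruct."""
    ll, lh, hl, hh = haar2d(img)
    details = np.concatenate([b.ravel() for b in (lh, hl, hh)])
    thresh = np.quantile(np.abs(details), 1 - keep)
    lh, hl, hh = [np.where(np.abs(b) >= thresh, b, 0.0) for b in (lh, hl, hh)]
    return ihaar2d(ll, lh, hl, hh)
```

Real coders then entropy-code the surviving coefficients (as EZW/SPIHT do with bit-plane significance passes) instead of merely zeroing the rest.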
62

Komprese signálů EKG nasnímaných pomocí mobilního zařízení / Compression of ECG signals recorded using mobile ECG device

Had, Filip January 2017 (has links)
Signal compression is a necessary part of ECG recording because of the relatively large amount of data, which must be transmitted, primarily wirelessly, for analysis. Because of the wireless transmission, it is necessary to minimize the amount of data as much as possible, using lossless or lossy compression algorithms. This work describes the SPIHT algorithm and a newly created experimental method based on PNG, together with their testing. This master's thesis also includes a bank of ECG signals with accelerometer data sensed in parallel. In the last part, a modification of the SPIHT algorithm that uses the accelerometer data is described and implemented.
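The data-minimization motivation can be illustrated with a generic lossless scheme unrelated to the thesis's SPIHT/PNG methods: delta coding followed by zigzag/varint packing, which exploits the sample-to-sample smoothness of ECG signals. Function names here are illustrative:

```python
def delta_varint_encode(samples):
    """Delta-encode integer samples, zigzag-map the deltas to unsigned
    ints, and pack them as 7-bit varints."""
    out, prev = bytearray(), 0
    for s in samples:
        d, prev = s - prev, s
        z = (d << 1) if d >= 0 else ((-d << 1) - 1)  # zigzag: sign into LSB
        while z >= 0x80:
            out.append((z & 0x7F) | 0x80)            # continuation bit set
            z >>= 7
        out.append(z)
    return bytes(out)

def delta_varint_decode(data):
    """Exact inverse of delta_varint_encode."""
    samples, prev, i = [], 0, 0
    while i < len(data):
        z, shift = 0, 0
        while True:
            b = data[i]; i += 1
            z |= (b & 0x7F) << shift
            shift += 7
            if b < 0x80:
                break
        d = (z >> 1) if (z & 1) == 0 else -((z + 1) >> 1)
        prev += d
        samples.append(prev)
    return samples
```

Small deltas fit in a single byte, so smooth biosignals shrink well even before any entropy coding.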
63

Secure Authenticated Key Exchange for Enhancing the Security of Routing Protocol for Low-Power and Lossy Networks

Alzahrani, Sarah Mohammed 26 May 2022 (has links)
No description available.
64

Efficient Graph Summarization of Large Networks

Hajiabadi, Mahdi 24 June 2022 (has links)
In this thesis, we study the notion of graph summarization, the fundamental task of finding a compact representation of the original graph, called the summary. Graph summarization can be used for reducing the footprint of the input graph, better visualization, anonymizing the identity of users, and query answering. We consider two different frameworks of graph summarization in this thesis: the utility-based framework and the correction set-based framework. In the utility-based framework, the input graph is summarized as long as a utility threshold is not violated. In the correction set-based framework, a set of correction edges is produced along with the summary graph. In this thesis we propose two algorithms for the utility-based framework and one for the correction set-based framework. All three of these algorithms are for static graphs (i.e. graphs that do not change over time). Then, we propose two more utility-based algorithms for fully dynamic graphs (i.e. graphs with edge insertions and deletions). Algorithms for graph summarization can be lossless (summarizing the input graph without losing any information) or lossy (losing some information about the input graph in order to summarize it further). Some of our algorithms are lossless and some lossy, but with controlled utility loss. Our first utility-driven graph summarization algorithm, G-SCIS, is based on a clique and independent set decomposition and produces optimal compression with zero loss of utility. The compression provided is significantly better than the state of the art in lossless graph summarization, while the runtime is two orders of magnitude lower. Our second algorithm is T-BUDS, a highly scalable, utility-driven algorithm for fully controlled lossy summarization. It achieves high scalability by combining memory reduction using a Maximum Spanning Tree with a novel binary search procedure.
T-BUDS outperforms the state of the art drastically in terms of the quality of summarization and is about two orders of magnitude better in terms of speed. In contrast to the competition, we are able to handle web-scale graphs on a single machine without performance impediment as the utility threshold (and size of summary) decreases. Also, we show that our graph summaries can be used as-is to answer several important classes of queries, such as triangle enumeration, PageRank, and shortest paths. We then propose algorithm LDME, a correction set-based graph summarization algorithm that produces compact output representations in a fast and scalable manner. To achieve this, we introduce (1) weighted locality sensitive hashing to drastically reduce the number of comparisons required to find good node merges, (2) an efficient way to compute the best quality merges that produces more compact outputs, and (3) a new sort-based encoding algorithm that is faster and more robust. More interestingly, our algorithm provides performance tuning settings to allow the option of trading compression for running time. On high compression settings, LDME achieves compression equal to or better than the state of the art with up to 53x speedup in running time. On high speed settings, LDME achieves up to two orders of magnitude speedup with only slightly lower compression. We also present two lossless summarization algorithms, Optimal and Scalable, for summarizing fully dynamic graphs. More concretely, we follow the framework of G-SCIS, which produces summaries that can be used as-is in several graph analytics tasks. Different from G-SCIS, which is a batch algorithm, Optimal and Scalable are fully dynamic and can respond rapidly to each change in the graph. Not only are Optimal and Scalable able to outperform G-SCIS and other batch algorithms by several orders of magnitude, but they also significantly outperform MoSSo, the state-of-the-art in lossless dynamic graph summarization.
While Optimal always produces the optimal summary, Scalable is able to trade the amount of node reduction for extra scalability. For reasonable values of the parameter $K$, Scalable outperforms Optimal by an order of magnitude in speed, while keeping the rate of node reduction close to that of Optimal. An interesting fact that we observed experimentally is that even if we were to run a batch algorithm, such as G-SCIS, once for every big batch of changes, it would still be much slower than Scalable. For instance, if 1 million changes occur in a graph, Scalable is two orders of magnitude faster than running G-SCIS just once at the end of the 1 million-edge sequence. / Graduate
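The correction set-based framework can be sketched generically: given a grouping of nodes into supernodes, each supernode pair either gets a superedge plus deletion corrections or has its edges listed as additions, whichever costs less. This is a toy illustration of the framework, not the LDME algorithm itself:

```python
import itertools
from collections import defaultdict

def summarize(edges, groups):
    """Toy correction set-based summary.
    edges: list of (u, v) pairs; groups: dict node -> supernode id.
    Returns (summary superedges, correction list of ('+'/'-', u, v))."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    members = defaultdict(list)
    for node, g in groups.items():
        members[g].append(node)
    summary, corrections = set(), []
    for a, b in itertools.combinations_with_replacement(sorted(members), 2):
        if a == b:
            pairs = list(itertools.combinations(members[a], 2))
        else:
            pairs = [(u, v) for u in members[a] for v in members[b]]
        present = [(u, v) for u, v in pairs if v in adj[u]]
        absent = [(u, v) for u, v in pairs if v not in adj[u]]
        # cost with a superedge: 1 + one '-' per missing pair;
        # cost without: one '+' per existing edge
        if pairs and 1 + len(absent) < len(present):
            summary.add((a, b))
            corrections += [('-', u, v) for u, v in absent]
        else:
            corrections += [('+', u, v) for u, v in present]
    return summary, corrections
```

Because every deviation is recorded as a correction, the original graph can be reconstructed exactly, which is what makes the framework lossless.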
65

Passerelle intelligente pour réseaux de capteurs sans fil contraints / Smart gateway for low-power and lossy networks

Leone, Rémy 24 July 2016 (has links)
Low-Power and Lossy Networks (LLNs) are constrained networks composed of nodes with scarce resources (memory, CPU, battery). They are highly heterogeneous and are used in varied contexts such as home automation or smart cities. To connect natively to the Internet, an LLN uses a gateway, whose position gives it a precise view of the traffic flowing between the Internet and the LLN. This thesis presents three contributions that add functionality to an LLN gateway in order to optimize the use of the constrained nodes' limited resources and to improve knowledge of their operating state. The first contribution introduces a non-intrusive estimator that infers the radio usage of constrained nodes from the network traffic passing through the gateway.
The second contribution adapts the validity time of information cached at the gateway, trading off energy cost against efficiency so that cached resources are used instead of soliciting the LLN's nodes. Finally, we present Makesense, an open-source framework for documenting, executing, and analyzing a complete LLN experiment reproducibly, on simulation or real nodes, from a single description.
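The second contribution's trade-off can be sketched as a gateway-side cache with a tunable validity time; the API below is hypothetical and only illustrates the idea of sparing the constrained node's radio at the cost of possibly staler answers:

```python
import time

class GatewayCache:
    """Sketch of a gateway cache for sensor readings (hypothetical API).
    A longer max_age saves node energy but risks returning stale data."""
    def __init__(self, fetch, max_age):
        self.fetch = fetch          # function that queries the constrained node
        self.max_age = max_age      # cache validity time in seconds
        self.store = {}             # resource -> (value, timestamp)
        self.node_queries = 0       # radio-costly queries actually issued

    def get(self, resource, now=None):
        now = time.monotonic() if now is None else now
        hit = self.store.get(resource)
        if hit is not None and now - hit[1] <= self.max_age:
            return hit[0]           # fresh enough: spare the node's radio
        value = self.fetch(resource)
        self.node_queries += 1
        self.store[resource] = (value, now)
        return value
```

Adapting `max_age` per resource, as the thesis proposes, amounts to tuning this freshness window from observed cost and hit-rate statistics.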
66

PCA and JPEG2000-based Lossy Compression for Hyperspectral Imagery

Zhu, Wei 30 April 2011 (has links)
This dissertation develops several new algorithms to solve existing problems in practical application of the previously developed PCA+JPEG2000, which has shown superior rate-distortion performance in hyperspectral image compression. In addition, a new scheme is proposed to facilitate multi-temporal hyperspectral image compression. Specifically, the uniqueness in each algorithm is described as follows.

1. An empirical piecewise linear equation is proposed to estimate the optimal number of major principal components (PCs) used in SubPCA+JPEG2000 for AVIRIS data. Sensor-specific equations are presented with excellent fitting performance for AVIRIS, HYDICE, and HyMap data. As a conclusion, a general guideline is provided for finding sensor-specific piecewise linear equations.
2. An anomaly-removal-based hyperspectral image compression algorithm is proposed. It preserves anomalous pixels in a lossless manner, and yields the same or even improved rate-distortion performance. It is particularly useful to SubPCA+JPEG2000 when compressing data with anomalies that may reside in minor PCs.
3. A segmented PCA-based PCA+JPEG2000 compression algorithm is developed, which spectrally partitions an image based on its spectral correlation coefficients. This compression scheme greatly improves the rate-distortion performance of PCA+JPEG2000 when the spatial size of the data is relatively smaller than its spectral size, especially at low bitrates. A sensor-specific partition method is also developed for fast processing with suboptimal performance.
4. A joint multi-temporal image compression scheme is proposed. The algorithm preserves change information in a lossless fashion during the compression. It can yield perfect change detection with slightly degraded rate-distortion performance.
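The spectral-decorrelation step shared by these PCA+JPEG2000 variants can be sketched as follows; the subsequent 2-D JPEG2000 coding of the principal-component images is omitted, and the function is a generic PCA illustration rather than the dissertation's code:

```python
import numpy as np

def pca_reduce(cube, n_pcs):
    """Project a hyperspectral cube (rows, cols, bands) onto its n_pcs
    leading principal components along the spectral axis, and return both
    the PC 'bands' (which a coder like JPEG2000 would then compress) and
    the reconstruction obtained from them."""
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands).astype(float)      # pixels as spectra
    mean = x.mean(axis=0)
    xc = x - mean
    cov = xc.T @ xc / (xc.shape[0] - 1)            # spectral covariance
    w, v = np.linalg.eigh(cov)                     # ascending eigenvalues
    basis = v[:, ::-1][:, :n_pcs]                  # top n_pcs eigenvectors
    pcs = xc @ basis                               # decorrelated components
    recon = (pcs @ basis.T + mean).reshape(rows, cols, bands)
    return pcs.reshape(rows, cols, n_pcs), recon
```

Choosing `n_pcs`, the number of major PCs to keep, is exactly the quantity the dissertation's piecewise linear equations estimate per sensor.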
67

Příspěvek k optimální syntéze filtračních obvodů / A Contribution to Optimal Synthesis of Filters

Szabó, Zoltán January 2012 (has links)
The presented dissertation thesis is focused on the optimization of filtering circuit synthesis. In the five main sections of this work, the author analyzes partial problems related to several areas within the synthesis of modern filtering circuits. The first chapter examines the elementary aspects that characterize present-day integrated circuits of voltage-feedback operational amplifiers, complemented with a view of the possible application of these circuits in the design of the filtering circuits proposed in subsequent parts of the thesis. In this context, the second chapter contains a description of the design and optimization of digitally controlled universal filters tunable by means of digital potentiometers originally produced for audio technology. These digitally controlled circuits are increasingly utilized as analog preprocessing blocks in digital signal processing systems. The most extensive portion of the dissertation is dedicated to a complex analysis of individual configurations of economical, purposely lossy active function blocks and modern voltage operational amplifiers. This part of the thesis aims at providing a detailed insight into the characteristics of individual configurations of the examined circuits; furthermore, the author proposes a comparison of the various application possibilities of these circuits and their wider use in the optimization of active frequency filters. The described section of the work also includes a definition and application examples of the designed and realized program, which significantly simplifies the design of purposely lossy ARC filters. In the penultimate part of the dissertation thesis, the design, development, and verification of a suitable synthesis procedure are presented together with the optimization of data and (in particular) power models of EMC filters.
Based on the verification of the characteristics inherent in the designed models of EMC filters, the suggested measuring procedure for these filters is described, including the design of a station for measuring the elementary parameters of line anti-interference devices up to very high frequencies. In the last section of the thesis, the author discusses the measurement of air ion concentration with an aspiration condenser and analyzes the systematic and random errors as well as the optimization of the filtration characteristics of the applied measurement method. This part includes a description of the design and characteristics of the realized fully automated measurement system with an aspiration condenser.
68

Designing a Novel RPL Objective Function & Testing RPL Objective Functions Performance

Mardini, Khalil, Abdulsamad, Emad January 2023 (has links)
The use of Internet of Things (IoT) systems has increased to meet the need for smart systems in various fields, such as smart homes, intelligent industries, medical systems, agriculture, and the military. IoT networks are expanding daily to include hundreds and thousands of IoT devices, which transmit information through other linked devices to reach the network sink or gateway. The information follows different routes to the network sink, and finding an ideal routing solution is a big challenge due to several factors, such as the power, computation, storage, and memory limitations of IoT devices. In 2011, a new standardized routing protocol for low-power and lossy networks (RPL) was released by the Internet Engineering Task Force (IETF), which adopted a distance-vector routing algorithm for it. The RPL protocol utilizes objective functions (OFs) to select the path depending on different metrics. These OFs with different metrics must be evaluated and tested to develop the best routing solution. This project aims to test the performance of the standardized RPL objective functions in a simulation environment. Afterwards, a new objective function with a new metric is implemented and tested under the same environmental conditions. The performance results of the standard objective functions and the newly implemented objective function are analyzed and compared to evaluate whether the standard objective functions or the new objective function is the better routing solution for an IoT device network.
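An objective function of the kind tested here can be sketched as MRHOF-style parent selection (rank = parent's advertised rank + link cost such as ETX, with a hysteresis threshold against parent churn). The field layout and threshold value below are illustrative, not taken from the RFCs or from the thesis:

```python
def select_parent(candidates, current=None, switch_threshold=1.5):
    """MRHOF-style parent selection sketch.
    candidates: list of (neighbour_id, advertised_rank, link_etx).
    A better candidate replaces the current parent only if it wins by
    switch_threshold, which damps route flapping."""
    cost = {nid: rank + etx for nid, rank, etx in candidates}
    best = min(cost, key=cost.get)
    if current in cost and cost[current] - cost[best] < switch_threshold:
        best = current  # improvement too small: keep the current parent
    return best, cost[best]
```

A new OF, as proposed in the thesis, would swap the `rank + etx` expression for a different metric combination while keeping the same selection skeleton.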
69

Integrated Frequency-Selective Conduction Transmission-Line EMI Filter

Liang, Yan 20 March 2009 (has links)
The multi-conductor lossy transmission-line model and finite element simulation tool are used to analyze the high-frequency attenuator and the DM transmission-line EMI filter. The insertion gain, transfer gain, current distribution, and input impedance of the filter under a nominal design are discussed. In order to apply the transmission-line EMI filter to power electronics systems, the performance of the filter under different dimensions, material properties, and source and load impedances must be known. The influences of twelve parameters of the DM transmission-line EMI filter on the cut-off frequency, the roll-off slope, and other characteristics of the insertion gain and transfer gain curves are investigated. The most influential parameters are identified. The current sharing between the copper and nickel conductors under different parameters is investigated. The performance of the transmission-line EMI filter under different source and load impedances is also explored. The measurement setups of the DM transmission-line EMI filter using a network analyzer are discussed. The network analyzer has a common-ground problem that influences the measured results of the high-frequency attenuator. However, the common-ground problem has a negligible influence on the measured results of the DM transmission-line EMI filter. The connectors and copper strips between the connectors and the filter introduce parasitic inductance to the measurement setup. Both simulated and measured results show that the transfer gain curve is very sensitive to the parasitic inductance. However, the insertion gain curve is not sensitive to the parasitic inductance. There are two major methods to reduce the parasitic inductance of the measurement setup: using small connectors and applying a four-terminal measurement setup.
The transfer gain curves of three measurement setups are compared: the two-terminal measurement setup with BNC connectors, the two-terminal measurement setup with Sub Miniature version B (SMB) connectors, and the four-terminal measurement setup with SMB connectors. The four-terminal measurement setup with SMB connectors is the most accurate one and is applied for all the transfer gain measurements in this dissertation. This dissertation also focuses on exploring ways to improve the performance of the DM transmission-line EMI filter. Several improved structures of the DM transmission-line EMI filter are investigated. The filter structure without an insulation layer can greatly reduce the thickness of the filter without changing its performance. The meander structure can increase the total length of the filter without taking up too much space, shifting the cut-off frequency lower and achieving more attenuation. A prototype of the two-dielectric-layer filter structure is built and measured. The measurement result confirms that a multi-dielectric-layer structure is an effective way to achieve a lower cut-off frequency and more attenuation. This dissertation proposes a broadband DM EMI filter combining the advantages of the discrete reflective LC EMI filter and the transmission-line EMI filter. Two DM absorptive transmission-line EMI filters take the place of the two DM capacitors in the discrete reflective LC EMI filter. The measured insertion gain of the prototype has a large roll-off slope at low frequencies and large attenuation at high frequencies. The dependence of the broadband DM EMI filter on source and load impedances is also investigated. A larger load (source) impedance gives more attenuation no matter whether it is resistive, inductive, or capacitive. The broadband DM EMI filter always has more high-frequency attenuation than the discrete reflective LC EMI filter under different load (source) impedances. / Ph. D.
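The insertion-gain quantity discussed throughout can be computed from the ABCD matrix of a uniform lossy line via the telegrapher's equations. This is a textbook single-conductor sketch, not the multi-conductor model used in the dissertation:

```python
import cmath
import math

def line_abcd(R, L, G, C, length, freq):
    """ABCD matrix of a uniform lossy transmission line from its
    per-unit-length series R, L and shunt G, C parameters."""
    w = 2 * math.pi * freq
    z = R + 1j * w * L            # series impedance per unit length
    y = G + 1j * w * C            # shunt admittance per unit length
    gamma = cmath.sqrt(z * y)     # propagation constant
    z0 = cmath.sqrt(z / y)        # characteristic impedance
    gl = gamma * length
    return (cmath.cosh(gl), z0 * cmath.sinh(gl),
            cmath.sinh(gl) / z0, cmath.cosh(gl))

def insertion_gain_db(abcd, zs, zl):
    """Load voltage with the two-port inserted, relative to a direct
    source-load connection; negative dB means attenuation."""
    A, B, C, D = abcd
    ig = (zs + zl) / (A * zl + B + C * zs * zl + D * zs)
    return 20 * math.log10(abs(ig))
```

A matched lossless line gives 0 dB insertion gain; adding series resistance (as the filter's nickel conductor does by design) pulls the gain below 0 dB, i.e. produces attenuation.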
70

Modern Error Control Codes and Applications to Distributed Source Coding

Sartipi, Mina 15 August 2006 (has links)
This dissertation first studies two-dimensional wavelet codes (TDWCs). TDWCs are introduced as a solution to the problem of designing a 2-D code that has low decoding complexity and the maximum erasure-correcting property for rectangular burst erasures. The half-rate TDWCs of dimensions N<sub>1</sub> X N<sub>2</sub> satisfy the Reiger bound with equality for burst erasures of dimensions N<sub>1</sub> X N<sub>2</sub>/2 and N<sub>1</sub>/2 X N<sub>2</sub>, where GCD(N<sub>1</sub>,N<sub>2</sub>) = 2. Examples of TDWCs are provided that recover any rectangular burst erasure of area N<sub>1</sub>N<sub>2</sub>/2. These lattice-cyclic codes can recover burst erasures with simple and efficient ML decoding. This work then studies the problem of distributed source coding for two and three correlated signals using channel codes. We propose to model the distributed source coding problem with a set of parallel channels, which simplifies distributed source coding to designing non-uniform channel codes. This design criterion improves the performance of the source coding considerably. LDPC codes are used for lossless and lossy distributed source coding, when the correlation parameter is known or unknown at the time of code design. We show that distributed source coding at the corner point using LDPC codes is simplified to a non-uniform LDPC code and semi-random punctured LDPC codes for a system of two and three correlated sources, respectively. We also investigate distributed source coding at any arbitrary rate on the Slepian-Wolf rate region. This problem is simplified to designing a rate-compatible LDPC code that has the unequal error protection property. This dissertation finally studies the distributed source coding problem for applications whose wireless channel is an erasure channel with unknown erasure probability. For these applications, rateless codes are better candidates than LDPC codes.
Non-uniform rateless codes and an improved decoding algorithm are proposed for this purpose. We introduce a reliable, rate-optimal, and energy-efficient multicast algorithm that uses distributed source coding and rateless coding. The proposed multicast algorithm performs very close to network coding, while it has lower complexity and higher adaptability.
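Syndrome-based distributed source coding, the idea underlying the channel-code approach above, can be shown with a toy Hamming(7,4) example: the encoder sends only the 3-bit syndrome of a 7-bit block, and the decoder recovers the block from correlated side information that differs in at most one bit. This is a textbook sketch, not the thesis's LDPC/rateless constructions:

```python
import numpy as np

# Hamming(7,4) parity-check matrix; column j (1-indexed) is j in binary,
# read as (1s-bit, 2s-bit, 4s-bit) down the rows
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def sw_encode(x):
    """Slepian-Wolf encoder: compress 7 bits to their 3-bit syndrome."""
    return H @ x % 2

def sw_decode(syndrome, y):
    """Recover x from its syndrome and side information y, assuming x
    and y differ in at most one position."""
    diff = (syndrome + H @ y) % 2      # syndrome of the pattern x XOR y
    x_hat = y.copy()
    if diff.any():
        # the syndrome value directly indexes the differing position
        pos = int(diff[0]) + 2 * int(diff[1]) + 4 * int(diff[2]) - 1
        x_hat[pos] ^= 1
    return x_hat
```

The 7-to-3 bit reduction is the compression; LDPC syndromes scale the same trick to long blocks and realistic correlation models.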
