About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Parallel lossless data compression based on the Burrows-Wheeler Transform

Gilchrist, Jeffrey S. 2007 (has links)
Thesis (M.App.Sc.) - Carleton University, 2007. / Includes bibliographical references (p. 99-103). Also available in electronic format on the Internet.
42

Succinct Data Structures

Gupta, Ankur, January 2007 (has links)
Thesis (Ph. D.)--Duke University, 2007.
43

New test vector compression techniques based on linear expansion

Chakravadhanula, Krishna V.; Touba, Nur A., January 2004 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2004. / Supervisor: Nur Touba. Vita. Includes bibliographical references.
44

Using semantic knowledge to improve compression on log files

Otten, Frederick John 19 November 2008 (has links)
With the move towards global and multi-national companies, information technology infrastructure requirements are increasing. As the size of these computer networks increases, it becomes more and more difficult to monitor, control, and secure them. Networks consist of a number of diverse devices, sensors, and gateways which are often spread over large geographical areas. Each of these devices produces log files which need to be analysed and monitored to provide network security and satisfy regulations. Data compression programs such as gzip and bzip2 are commonly used to reduce the quantity of data for archival purposes after the log files have been rotated. However, many other compression programs exist, each with its own advantages and disadvantages. These programs each use a different amount of memory and take different compression and decompression times to achieve different compression ratios. System log files also contain redundancy which is not necessarily exploited by standard compression programs. Log messages usually follow a similar format with a defined syntax: not all ASCII characters are used, and the messages contain certain "phrases" which are often repeated. This thesis investigates the use of compression as a means of data reduction and how the use of semantic knowledge can improve data compression (also applying the results to different scenarios that can occur in a distributed computing environment). It presents the results of a series of tests performed on different log files. It also examines the semantic knowledge which exists in maillog files and how it can be exploited to improve the compression results. The results from a series of text preprocessors which exploit this knowledge are presented and evaluated. These preprocessors include one which replaces timestamps and IP addresses with their binary equivalents and one which replaces words from a dictionary with unused ASCII characters. In this thesis, data compression is shown to be an effective method of data reduction, producing up to 98 percent reduction in file size on a corpus of log files. The use of preprocessors which exploit semantic knowledge results in up to 56 percent improvement in overall compression time and up to 32 percent reduction in compressed size.
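
As a concrete illustration of the first kind of preprocessor, the sketch below replaces dotted-quad IPv4 addresses with a one-byte marker followed by their four-byte binary form, shrinking up to 15 bytes of text to 5 bytes before a general-purpose compressor such as gzip or bzip2 runs. It is a minimal reconstruction of the idea under stated assumptions (the marker byte, regular expression, and function names are invented here), not the thesis's implementation.

```python
import re

# Assumption: 0x01 never occurs in the raw logs, so it can serve as a
# marker (the thesis similarly exploits ASCII codes the logs never use).
IP_MARKER = b"\x01"

IP_RE = re.compile(rb"\b(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\b")

def pack_ips(line: bytes) -> bytes:
    """Replace each dotted-quad IPv4 address (up to 15 bytes of text)
    with the marker byte followed by its 4-byte binary form."""
    def repl(m):
        octets = [int(g) for g in m.groups()]
        if any(o > 255 for o in octets):
            return m.group(0)  # not a valid address; leave the text alone
        return IP_MARKER + bytes(octets)
    return IP_RE.sub(repl, line)

def unpack_ips(line: bytes) -> bytes:
    """Invert pack_ips so the round trip is lossless."""
    out = bytearray()
    i = 0
    while i < len(line):
        if line[i:i+1] == IP_MARKER and i + 5 <= len(line):
            out += ".".join(str(b) for b in line[i+1:i+5]).encode()
            i += 5
        else:
            out.append(line[i])
            i += 1
    return bytes(out)

# Round trip: unpack_ips(pack_ips(b"refused connect from 192.168.10.7"))
# returns the original line unchanged.
```

A timestamp preprocessor would follow the same pattern, parsing the fixed-syntax date field into a fixed-width binary value.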
45

Pseudo-random access compressed archive for security log data

Radley, Johannes Jurgens January 2015 (has links)
We are surrounded by an increasing number of devices and applications that produce a huge quantity of machine-generated data. Almost all machine data contains some element of security information that can be used to discover, monitor, and investigate security events. This work proposes a pseudo-random access compressed storage method for log data, to be used with an information retrieval system that in turn provides the ability to search and correlate log data and the corresponding events. We explain the method for converting log files into distinct events and storing the events in a compressed file. This yields an entry identifier for each log entry that provides a pointer that can be used by indexing methods. The research also evaluates the compression performance penalties encountered by using this storage system, including decreased compression ratio, as well as increased compression and decompression times.
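
The sketch below illustrates the general idea in miniature, making no claim about the thesis's actual on-disk format: events are compressed in fixed-size blocks so that reading one entry decompresses only one block, and the sequential entry identifier doubles as the pointer an external index can store. The class name, block size, and use of zlib are assumptions made for illustration.

```python
import zlib

class BlockCompressedLog:
    """Append-only archive: events are zlib-compressed in blocks of
    block_size entries, so fetching one entry decompresses one block
    rather than the whole archive (pseudo-random access)."""

    def __init__(self, block_size=1024):
        self.block_size = block_size
        self.blocks = []    # list of compressed blocks
        self.pending = []   # events waiting for the current block to fill
        self.count = 0

    def append(self, event: str) -> int:
        """Store one single-line event; the returned id is the stable
        pointer an indexing method can keep."""
        self.pending.append(event)
        self.count += 1
        if len(self.pending) == self.block_size:
            self.blocks.append(
                zlib.compress("\n".join(self.pending).encode("utf-8"), 9))
            self.pending = []
        return self.count - 1

    def get(self, entry_id: int) -> str:
        """Pseudo-random access: locate, decompress, and slice one block."""
        block_no, offset = divmod(entry_id, self.block_size)
        if block_no == len(self.blocks):  # entry still in the write buffer
            return self.pending[offset]
        data = zlib.decompress(self.blocks[block_no])
        return data.decode("utf-8").split("\n")[offset]
```

The block size controls the trade-off the abstract measures: larger blocks recover most of the lost compression ratio, while smaller blocks keep random-access reads cheap.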
46

Adaptive compression coding

Nasiopoulos, Panagiotis January 1988 (has links)
An adaptive image compression coding technique, ACC, is presented. This algorithm is shown to preserve edges and give better-quality decompressed pictures and better compression ratios than Absolute Moment Block Truncation Coding (AMBTC). Lookup tables are used to achieve better compression rates without affecting the visual quality of the reconstructed image. Regions with approximately uniform intensities are successfully detected by using the range, and these regions are approximated by their average. This procedure leads to a further reduction in the compression data rates. A method for preserving edges is introduced. It is shown that as more details are preserved around edges, the pictorial results improve dramatically. The ragged appearance of the edges in AMBTC is reduced or eliminated, leading to images far superior to those of AMBTC. For most of the images, ACC yields a Root Mean Square Error smaller than that obtained by AMBTC. Decompression time is shown to be comparable to that of AMBTC for low threshold values and becomes significantly lower as the compression rate becomes smaller. An adaptive filter is introduced which helps recover lost texture at very low compression rates (0.8 to 0.6 b/p, depending on the degree of texture in the image). This algorithm is easy to implement since no special hardware is needed. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
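
For context, here is a compact sketch of AMBTC-style block coding extended with the range-based uniform-region test described above. The 4x4 block is the classical AMBTC choice; the threshold value is an arbitrary assumption, and this illustrates the underlying idea rather than the ACC algorithm itself.

```python
import numpy as np

BLOCK = 4       # classical AMBTC block size
THRESHOLD = 8   # assumed intensity range below which a block is uniform

def encode_block(block):
    """Store near-uniform blocks as their average; otherwise keep the
    AMBTC parameters (two reconstruction levels plus a bitmap)."""
    if block.max() - block.min() < THRESHOLD:
        return ("uniform", block.mean())
    mean = block.mean()
    bitmap = block >= mean
    q, n = int(bitmap.sum()), block.size
    alpha = np.abs(block - mean).mean()   # first absolute central moment
    high = mean + n * alpha / (2 * q)
    low = mean - n * alpha / (2 * (n - q))
    return ("ambtc", low, high, bitmap)

def decode_block(code):
    if code[0] == "uniform":
        return np.full((BLOCK, BLOCK), code[1])
    _, low, high, bitmap = code
    return np.where(bitmap, high, low)
```

The uniform branch is what drives the extra rate reduction: a flat 4x4 region costs one stored value instead of two reconstruction levels and a 16-bit bitmap.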
47

Studies on probabilistic tensor subspace learning

Zhou, Yang 04 January 2019 (has links)
Most real-world data such as images and videos are naturally organized as tensors, and often have high dimensionality. Tensor subspace learning is a fundamental problem that aims at finding low-dimensional representations from tensors while preserving their intrinsic characteristics. By dealing with tensors in the learned subspace, subsequent tasks such as clustering, classification, visualization, and interpretation can be greatly facilitated. This thesis studies the tensor subspace learning problem from a generative perspective, and proposes four probabilistic methods that generalize the ideas of classical subspace learning techniques for tensor analysis. Probabilistic Rank-One Tensor Analysis (PROTA) generalizes probabilistic principal component analysis. It is flexible in capturing data characteristics, and avoids rotational ambiguity. For robustness against overfitting, concurrent regularizations are further proposed to concurrently and coherently penalize the whole subspace, so that unnecessary scale restrictions can be relaxed in regularizing PROTA. Probabilistic Rank-One Discriminant Analysis (PRODA) is a bilinear generalization of probabilistic linear discriminant analysis. It learns a discriminative subspace by representing each observation as a linear combination of collective and individual rank-one matrices. This provides PRODA with both the expressiveness of capturing discriminative features and non-discriminative noise, and the capability of exploiting the (2D) tensor structures. Bilinear Probabilistic Canonical Correlation Analysis (BPCCA) generalizes probabilistic canonical correlation analysis for learning correlations between two sets of matrices. It is built on a hybrid Tucker model in which the two-view matrices are combined in two stages via matrix-based and vector-based concatenations, respectively. This enables BPCCA to capture two-view correlations without breaking the matrix structures. Bayesian Low-Tubal-Rank Tensor Factorization (BTRTF) is a fully Bayesian treatment of robust principal component analysis for recovering tensors corrupted with gross outliers. It is based on the recently proposed tensor-SVD model, and has more expressive modeling power in characterizing tensors with a certain orientation such as images and videos. A novel sparsity-inducing prior is also proposed to provide BTRTF with automatic determination of the tensor rank (subspace dimensionality). Comprehensive validations and evaluations are carried out on both synthetic and real-world datasets. Empirical studies on parameter sensitivities and convergence properties are also provided. Experimental results show that the proposed methods achieve the best overall performance in various applications such as face recognition, photograph-sketch matching, and background modeling.
Keywords: Tensor subspace learning, probabilistic models, Bayesian inference, tensor decomposition.
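
For readers unfamiliar with the rank-one building block these models share, the sketch below fits a single rank-one component to a 3-way tensor by alternating least squares. It is a deterministic toy that shows only the decomposition structure; PROTA and its siblings replace this with probabilistic inference, and none of the names below come from the thesis.

```python
import numpy as np

def rank_one_fit(X, iters=50, seed=0):
    """Fit a single rank-one component a (x) b (x) c to a 3-way tensor X
    by alternating least squares: with two factors fixed, the third has
    a closed-form least-squares update."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(d) for d in (I, J, K))
    for _ in range(iters):
        a = np.einsum("ijk,j,k->i", X, b, c) / ((b @ b) * (c @ c))
        b = np.einsum("ijk,i,k->j", X, a, c) / ((a @ a) * (c @ c))
        c = np.einsum("ijk,i,j->k", X, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# The fitted tensor is np.einsum("i,j,k->ijk", a, b, c); stacking R such
# components gives a rank-R approximation.
```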
48

The use of context in text compression

Reich, Edwina Helen. January 1984 (has links)
No description available.
49

Efficient and Secure Image and Video Processing and Transmission in Wireless Sensor Networks

Assegie, Samuel January 2010 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Sensor nodes forming a network and using wireless communications are highly useful in a variety of applications, including battlefield (military) surveillance, building security, medical and health services, environmental monitoring in harsh conditions, scientific investigations on other planets, etc. But these wireless sensors are resource constrained: limited power supply, bandwidth for communication, processing speed, and memory space. One possible way to achieve maximum utilization of those constrained resources is to apply signal processing and compress the sensor readings. Usually, processing data consumes much less power than transmitting data in a wireless medium, so it is effective to apply data compression, trading computation for communication, before transmitting data in order to reduce the total power consumption of a sensor node. However, the existing state-of-the-art compression algorithms are not suitable for wireless sensor nodes due to their limited resources. Therefore there is a need to design signal processing (compression) algorithms that take the resource constraints of wireless sensors into account. In our work, we designed a lightweight codec system aimed at surveillance as a target application. In designing the codec system, we have proposed new design ideas and also tweaked existing encoding algorithms to fit the target application. Also, during data transmission among sensors and between sensors and the base station, the data has to be secured. We have addressed some security issues by assessing the security of wavelet tree shuffling as the only security mechanism.
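
As a rough illustration of the shuffling idea whose security the thesis assesses, the sketch below applies one level of a 2D Haar transform and then permutes the coefficients with a key-seeded pseudo-random permutation, so a receiver without the key cannot restore the coefficient order. This is a toy stand-in, not the thesis's actual wavelet tree shuffling construction, and every name in it is illustrative.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform (rows, then columns).
    Assumes both image dimensions are even."""
    def step(x):  # pairwise averages, then pairwise differences
        return np.concatenate([(x[..., ::2] + x[..., 1::2]) / 2,
                               (x[..., ::2] - x[..., 1::2]) / 2], axis=-1)
    return step(step(img).T).T

def keyed_shuffle(coeffs, key):
    """Permute the coefficients with a PRNG seeded by the shared key."""
    perm = np.random.default_rng(key).permutation(coeffs.size)
    return coeffs.ravel()[perm].reshape(coeffs.shape)

def keyed_unshuffle(shuffled, key):
    """Regenerate the same permutation from the key and invert it."""
    perm = np.random.default_rng(key).permutation(shuffled.size)
    flat = np.empty(shuffled.size)
    flat[perm] = shuffled.ravel()
    return flat.reshape(shuffled.shape)
```

Shuffling costs almost no computation, which is why it is attractive on resource-constrained nodes; the open question the thesis addresses is whether such a cheap mechanism is secure enough on its own.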
50

Optimizing bandwidth of tactical communications systems

Cox, Criston W. 06 1900 (has links)
Current tactical networks are oversaturated, often slowing systems down to unusable speeds. Utilizing data collected from major exercises and Operation Iraqi Freedom II (OIF II), a typical existing tactical network is modeled and analyzed using NETWARS, a DISA-sponsored communication systems modeling and simulation program. Optimization technologies are then introduced, such as network compression, caching, Quality of Service (QoS), and the Space Communications Protocol Standards Transport Protocol (SCPS-TP). The model is then altered to reflect an optimized system, and simulations are run for comparison. Data for the optimized model was obtained by testing commercial optimization products known as Protocol Enhancement Proxies (PEPs) at the Marine Corps Tactical Systems Support Activity (MCTSSA) testing laboratory.
