11

PERFORMANCE AND COMPLEXITY CO-EVALUATIONS OF MPEG4-ALS COMPRESSION STANDARD FOR LOW-LATENCY MUSIC COMPRESSION

Matthew, Isaac Kevin 26 September 2008 (has links)
No description available.
12

Evaluating and Implementing JPEG XR Optimized for Video Surveillance

Yu, Lang January 2010 (has links)
This report describes both the evaluation and the implementation of the upcoming image compression standard JPEG XR. The intention is to determine whether JPEG XR is an appropriate standard for IP-based video surveillance. Video surveillance, especially IP-based video surveillance, plays an increasing role in the security market. To suit surveillance, the video stream generated by the camera must have a low bit-rate and low network latency while keeping a high dynamic display range. The thesis starts with an in-depth study of the JPEG XR encoding standard. Since the standard allows different settings, optimized settings are applied to the JPEG XR encoder to fit the requirements of network video surveillance. A comparative evaluation of JPEG XR versus JPEG is then delivered in both objective and subjective terms. Later, part of the JPEG XR encoder is implemented in hardware as an accelerator for further evaluation, with SystemVerilog as the coding language; a TSMC 40nm process library and the Synopsys ASIC tool chain are used for synthesis. The throughput, area, and power of the encoder are given and analyzed. Finally, the system integration of the JPEG XR hardware encoder into the Axis ARTPEC-X SoC platform is discussed.
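As a hedged illustration of the objective side of such a codec comparison, the sketch below computes PSNR, the standard objective image-quality metric, in Python with numpy. The image arrays and the equal-bitrate setup are assumptions for illustration, not taken from the thesis.

```python
# Minimal PSNR computation for objective codec comparison.
# `reference` and `decoded` are assumed 8-bit image arrays of equal shape,
# e.g. an original frame and the same frame after a JPEG or JPEG XR
# encode/decode round trip at a matched bitrate.
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    diff = reference.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)
```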
14

Evaluation and Hardware Implementation of Real-Time Color Compression Algorithms

Ojani, Amin, Caglar, Ahmet January 2008 (has links)
A major bottleneck, for performance as well as power consumption, of graphics hardware in mobile devices is the amount of data that needs to be transferred to and from memory. In hardware-accelerated 3D graphics, for example, a large part of the memory accesses is due to large and frequent color buffer data transfers. In a graphics hardware block, color data is typically processed in RGB format. For both 3D graphics rasterization and image composition, several pixels need to be read from and written to memory to generate a pixel in the frame buffer. This generates a lot of traffic on the memory interfaces, which impacts both performance and power consumption, so it is important to minimize the amount of color buffer data. One way of reducing the required memory bandwidth is to compress the color data before writing it to memory and decompress it before using it in the graphics hardware block. This compression/decompression must be done “on-the-fly”, i.e. it has to be very fast so that the hardware accelerator does not have to wait for data. In this thesis, we investigated several exact (lossless) color compression algorithms from a hardware implementation point of view, for use in high-throughput hardware. Our study shows that the compression/decompression datapath can be implemented well even under stringent area and throughput constraints; memory interfacing of these blocks, however, is more critical and can dominate the design.
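As a hedged sketch of the kind of building block exact color compression schemes use, the code below shows per-channel delta prediction against the left neighbor: smooth gradients become long runs of small residuals that entropy-code well, and the wrap-around arithmetic keeps the scheme lossless. This is an illustrative scheme, not necessarily one of the algorithms the thesis evaluated.

```python
import numpy as np

def delta_encode_tile(tile: np.ndarray) -> np.ndarray:
    """tile: (H, W, 3) uint8 RGB. Returns per-channel residuals mod 256."""
    residual = tile.astype(np.int16)
    residual[:, 1:, :] -= tile[:, :-1, :].astype(np.int16)  # horizontal delta
    return (residual & 0xFF).astype(np.uint8)  # wrap-around keeps it exact

def delta_decode_tile(residual: np.ndarray) -> np.ndarray:
    tile = residual.astype(np.uint16)
    for x in range(1, tile.shape[1]):  # prefix sum mod 256 undoes the deltas
        tile[:, x, :] = (tile[:, x, :] + tile[:, x - 1, :]) & 0xFF
    return tile.astype(np.uint8)

tile = np.random.default_rng(0).integers(0, 256, (4, 8, 3), dtype=np.uint8)
assert np.array_equal(delta_decode_tile(delta_encode_tile(tile)), tile)
```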
15

Evaluation of Idempotency & Block Size of Data on the Performance of Normalized Compression Distance Algorithm

Mandhapati, Venkata Srikanth, Bajwa, Kamran Ali January 2012 (has links)
Normalized compression distance (NCD) is a similarity distance metric used for analyzing the type of file fragments. The performance of NCD depends on the underlying compression algorithm. We studied three compressors: bzip2, gzip, and ppmd. In terms of compression ratio, ppmd is better than bzip2 and bzip2 is better than gzip; which of the three is preferable from the viewpoint of idempotency is what we evaluate. We then applied NCD, with k-nearest-neighbour as the classification algorithm, to randomly selected public corpus data with different block sizes (512, 1024, 1536, and 2048 bytes). The performance of the two compressors bzip2 and gzip for the NCD algorithm is also compared from the perspective of idempotency. Objectives: In this study we investigated the combined effect of two parameters, compression ratio versus idempotency and varying data block size, on the performance of NCD. The objective is to determine whether, for better NCD performance, a compressor should be selected on the basis of better compression ratio or of better idempotency. The purpose of using different block sizes was to evaluate whether NCD performance improves when the block size of the data used to build the datasets is varied. Methods: Experiments were performed to test the hypotheses and evaluate the effect of compression ratio versus idempotency and of data block size on the performance of NCD. Results: The null hypotheses of the main experiment were retained: there is no statistically significant difference in NCD performance when the block size is varied, nor when a compressor is selected for NCD on the basis of better compression ratio rather than better idempotency. Conclusions: Since the experiments were unable to reject the null hypotheses of the main experiment, no effect of the independent variables on the dependent variable can be concluded; that is, compression ratio versus idempotency and varying block size have no statistically significant effect on the performance of NCD.
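For reference, NCD is defined as NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed length. The sketch below computes it with Python's built-in gzip and bz2 and checks idempotency, i.e. how close C(xx) is to C(x). It omits ppmd, which has no standard-library binding, and the sample block is illustrative, not from the study's corpus.

```python
import bz2, gzip

def clen(data: bytes, compress) -> int:
    """Compressed length C(data) under the given compressor."""
    return len(compress(data))

def ncd(x: bytes, y: bytes, compress=bz2.compress) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = clen(x, compress), clen(y, compress), clen(x + y, compress)
    return (cxy - min(cx, cy)) / max(cx, cy)

def idempotency_error(x: bytes, compress=bz2.compress) -> float:
    """A compressor is idempotent when C(xx) ~= C(x); smaller is better."""
    return abs(clen(x + x, compress) - clen(x, compress)) / clen(x, compress)

block = b"example file fragment" * 64   # stand-in for a fixed-size block
print(ncd(block, block[::-1], bz2.compress), idempotency_error(block))
print(ncd(block, block[::-1], gzip.compress), idempotency_error(block, gzip.compress))
```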
16

Lossless Image compression using MATLAB : Comparative Study

Kodukulla, Surya Teja January 2020 (has links)
Context: Image compression is one of the key applications in commercial, research, defence, and medical fields. Large image files cannot be processed or stored quickly and efficiently, so compressing images while maintaining the maximum possible quality is very important for real-world applications. Objectives: Lossy compression is widely popular for image compression and is used in commercial applications, but efficient work with images often requires high quality at a comparatively low file size. Lossless compression algorithms are therefore compared in this study to determine which algorithm retains quality while achieving a decent compression ratio. Method: The lossless algorithms compared are LZW, RLE, Huffman, DCT in lossless mode, and DWT. The compression techniques are implemented in MATLAB using the Image Processing Toolbox. The compressed images are compared for subjective image quality, with emphasis on maintaining quality rather than on minimizing file size. Result: The LZW compression produces binary images, failing in this implementation to produce a lossless image. The Huffman and RLE algorithms produce similar results, with compression ratios in the range 2.5 to 3.7; both are based on redundancy reduction. The DCT and DWT algorithms compress every element in the matrix defined for the images, maintaining lossless quality with compression ratios in the range 2 to 3.5. Conclusion: The DWT algorithm is the most efficient way to compress an image losslessly; since wavelets are used in this compression, all elements in the image are compressed while retaining quality. Huffman and RLE also produce lossless images, but across a large variety of images some may not be compressed with complete efficiency.
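As a concrete illustration of the redundancy-reduction idea behind the RLE results, here is a minimal run-length encoder/decoder. It is a sketch in Python rather than the MATLAB toolbox the thesis used, and the test data is synthetic.

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Collapse repeated bytes into (run length, value) pairs, runs capped at 255."""
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([value]) * count for count, value in runs)

raw = bytes([255] * 1000 + [0] * 1000)    # flat image regions compress well
encoded = rle_encode(raw)
assert rle_decode(encoded) == raw          # lossless round trip
print(f"compression ratio: {len(raw) / (2 * len(encoded)):.1f}")  # 2 bytes/run
```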
17

[en] DEVELOPMENT AND EXPERIMENTAL EVALUATION OF A ROTARY INTERNAL COMBUSTION ENGINE / [pt] DESENVOLVIMENTO E AVALIAÇÃO EXPERIMENTAL DE UM MOTOR A COMBUSTÃO INTERNA ROTATIVO

FILIPE TEIXEIRA DE FREITAS E SILVA 20 June 2018 (has links)
The present work describes the construction, assembly, project revision, and preliminary experimental evaluation of an innovative rotary spark-ignition internal combustion engine. First, a literature survey was carried out; it provided useful definitions for classifying rotary engines, and some of their specific characteristics were discussed. The engine can be classified as a cat-and-mouse engine or Twin-Rotor Piston Engine. It is characterized by two pairs of displacers, mounted on two rotors, which rotate at a variable rotational speed within a cylindrical cavity. The driving mechanism is such that the relative distance between each pair of displacers varies continuously, thus providing the positive-displacement effect. The engine therefore has four chambers, each with its own time-varying volume, so that thermodynamic processes equivalent to those of a four-stroke reciprocating internal combustion engine can take place. This engine presents a unique and innovative mechanism by which the compression ratio can be varied during operation, thus optimizing engine efficiency for a given fuel. Engine components, designed in a previous effort, were fabricated according to the original project. A prototype was assembled, with all components following a routine of project revision, including measurements, uncertainties, and adjustments. The engine was then placed on a test bench where preliminary non-firing external-driving tests were carried out, measuring volumetric flow rate, driving (frictional) power, and maximum cylinder pressure with the displacer at top dead center, all as functions of the primary-shaft angular velocity.
18

Komprese signálu EKG / Compression of ECG signal

Blaschová, Eliška January 2016 (has links)
This paper presents the most well-known published compression methods. Compression of the ECG signal is important primarily for saving space on memory cards and for improving the efficiency of data transfer. The application of the wavelet transform to compression is a widely discussed topic worldwide, which is why the paper focuses in this direction. The resulting wavelet coefficients may first be quantized and then compressed using a suitable method. There are many options for the choice of wavelet and the degree of decomposition, and these will be tested from the point of view of the most efficient compression of the ECG signal.
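The pipeline the abstract describes (decompose, threshold or quantize the coefficients, reconstruct) can be sketched as follows. The sketch assumes the third-party PyWavelets package and a synthetic stand-in signal; the wavelet "db4", decomposition level 5, and threshold value are illustrative choices, not settings from the thesis.

```python
import numpy as np
import pywt  # third-party: PyWavelets

ecg = np.sin(np.linspace(0, 20 * np.pi, 2048))     # stand-in for an ECG trace

# Multilevel DWT; the wavelet and decomposition level are the tunable choices
coeffs = pywt.wavedec(ecg, wavelet="db4", level=5)

# Crude "quantization": zero out small coefficients before entropy coding
threshold = 0.05
kept_coeffs = [pywt.threshold(c, threshold, mode="hard") for c in coeffs]

reconstructed = pywt.waverec(kept_coeffs, wavelet="db4")
kept = sum(int(np.count_nonzero(c)) for c in kept_coeffs)
total = sum(c.size for c in kept_coeffs)
print(f"nonzero coefficients kept: {kept}/{total}")
```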
19

Development Of A Single Cylinder SI Engine For 100% Biogas Operation

Kapadia, Bhavin Kanaiyalal 03 1900 (has links)
This work concerns a systematic study of IC engine operation with 100% biogas as fuel (as opposed to the dual-fuel mode), with particular emphasis on operational issues and the quest for high-efficiency strategies. As a first step, a commercially available 1.2 kW genset engine is modified for biogas operation. The conventional premixing of air and biogas is compared with a new manifold injection strategy. The effect of biogas composition on engine performance is also studied. Results from the genset engine study indicate a very low overall efficiency of the system, mainly due to the very low compression ratio (4.5) of the engine. To gain further insight into the factors that contribute to this low efficiency, thermodynamic engine simulations are conducted. Reasonable agreement with experiments is obtained after incorporating estimated combustion durations. Subsequently, the model is used as a tool to predict the effect of parameters such as compression ratio, spark timing, and combustion duration on engine performance and efficiency. Simulations show that significant improvement in performance can be obtained at high compression ratios. As a step towards developing a more efficient system, and based on insight from the simulations, a high compression ratio (9.2) engine is selected. This engine is coupled to a 3 kW alternator and operated on 100% biogas. Both strategies, i.e., premixing and manifold injection, are implemented. The results show very high overall (chemical-to-electrical) efficiencies, with a maximum value of 22% at 1.4 kW with the manifold injection strategy. The new manifold injection strategy proposed here is found to be clearly superior to the conventional premixing method. The main reasons are the higher volumetric efficiency (25% higher than that for the premixing mode of supply) and overall lean operation of the engine across the entire load range. Predictions show excellent agreement with measurements, enabling the model to be used as a tool for further study. Simulations suggest that a higher compression ratio (up to 13) and appropriate spark advance can lead to higher engine power output and efficiency.
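The efficiency gain from a higher compression ratio follows the ideal Otto-cycle relation η = 1 − r^(1−γ). The sketch below evaluates it at the compression ratios mentioned in the abstract; γ = 1.3 is an assumed representative value for a biogas-air mixture, not a figure from the thesis, and real engine efficiencies are lower than these ideal values.

```python
# Ideal Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma)
gamma = 1.3  # assumed specific-heat ratio for a biogas-air mixture
for r in (4.5, 9.2, 13.0):
    eta = 1.0 - r ** (1.0 - gamma)
    print(f"compression ratio {r:4.1f}: ideal efficiency {eta:.1%}")
# prints roughly 36%, 49%, and 54%, showing why raising r from 4.5 pays off
```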
20

How to Juggle Columns: An Entropy-Based Approach for Table Compression

Paradies, Marcus, Lemke, Christian, Plattner, Hasso, Lehner, Wolfgang, Sattler, Kai-Uwe, Zeier, Alexander, Krueger, Jens 25 August 2022 (has links)
Many relational databases exhibit complex dependencies between data attributes, caused either by the nature of the underlying data or by explicitly denormalized schemas. In data warehouse scenarios, calculated key figures may be materialized or hierarchy levels may be held within a single dimension table. Such column correlations and the resulting data redundancy may result in additional storage requirements. They may also result in bad query performance if inappropriate independence assumptions are made during query compilation. In this paper, we tackle the specific problem of detecting functional dependencies between columns to improve the compression rate for column-based database systems, which both reduces main memory consumption and improves query performance. Although a huge variety of algorithms have been proposed for detecting column dependencies in databases, we maintain that increased data volumes and recent developments in hardware architectures demand novel algorithms with much lower runtime overhead and smaller memory footprint. Our novel approach is based on entropy estimations and exploits a combination of sampling and multiple heuristics to render it applicable for a wide range of use cases. We demonstrate the quality of our approach by means of an implementation within the SAP NetWeaver Business Warehouse Accelerator. Our experiments indicate that our approach scales well with the number of columns and produces reliable dependence structure information. This both reduces memory consumption and improves performance for nontrivial queries.
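The core test can be made concrete: a functional dependency A → B holds exactly when H(B | A) = H(A, B) − H(A) = 0, and the paper's approach estimates these entropies on samples. Below is a minimal plug-in sketch; the column data, sample size, and helper names are illustrative assumptions, not the paper's implementation.

```python
import math, random
from collections import Counter

def entropy(values) -> float:
    """Plug-in Shannon entropy estimate in bits."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def fd_score(col_a, col_b, sample_size=10_000) -> float:
    """H(B|A) estimated on a row sample; a value near 0 suggests A -> B."""
    rows = list(zip(col_a, col_b))
    if len(rows) > sample_size:
        rows = random.sample(rows, sample_size)
    a_only = [a for a, _ in rows]
    return entropy(rows) - entropy(a_only)  # H(A,B) - H(A) = H(B|A)

city = ["Dresden", "Potsdam", "Dresden", "Berlin"]
state = ["Saxony", "Brandenburg", "Saxony", "Berlin"]
print(fd_score(city, state))  # 0.0 => city functionally determines state
```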
