31

Digitally Controlled DC-DC Buck Converters with Lossless Current Sensing

January 2011 (has links)
abstract: Current sensing ability is one of the most desirable features of contemporary current- or voltage-mode controlled DC-DC converters. Current sensing can be used for overload protection, multi-stage converter load balancing, current-mode control, multi-phase converter current sharing, load-independent control, power efficiency improvement, etc. There are a handful of existing approaches to current sensing, such as external resistor sensing, triode-mode current mirroring, observer sensing, Hall-effect sensors, transformers, DC Resistance (DCR) sensing, and Gm-C filter sensing. However, each method has one or more issues that prevent it from being successfully applied in DC-DC converters, e.g. low accuracy, discontinuous sensing nature, high sensitivity to switching noise, high cost, requirement of known external power filter components, or bulky size. In this dissertation, an offset-independent inductor Built-In Self Test (BIST) architecture is proposed which is able to measure the inductor inductance and DCR. The measured DCR enables the proposed continuous, lossless, average current sensing scheme. A digital Voltage Mode Control (VMC) DC-DC buck converter with the inductor BIST and current sensing architecture is designed, fabricated, and experimentally tested. The average measurement errors for inductance, DCR, and current sensing are 2.1%, 3.6%, and 1.5% respectively. Within the 3.5 mm by 3.5 mm die area, the inductor BIST and current sensing circuits, including related pins, consume only 5.2% of the die area. BIST mode draws 40 mA for a maximum period of 200 us upon start-up, and the continuous current sensing consumes about 400 uA of quiescent current. This buck converter utilizes an adaptive compensator: it can update the compensator internally so that the overall system has a proper loop response over a large range of inductance and load current. Next, a digital Average Current Mode Control (ACMC) DC-DC buck converter with the proposed average current sensing circuits is designed and tested. To reduce chip area and power consumption, a 9-bit hybrid Digital Pulse Width Modulator (DPWM) which uses a Mixed-mode DLL (MDLL) is also proposed. The DC-DC converter has a maximum 12 V input, a 1-11 V output range, and a maximum 3 W output power. The maximum error of one least significant bit (LSB) delay of the proposed DPWM is less than 1%. / Dissertation/Thesis / Ph.D. Electrical Engineering 2011
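The abstract does not detail the sensing circuit itself, so the following is only an illustrative sketch of one common lossless DCR current-sensing arrangement: an RC network placed across the inductor whose time constant is matched to L/DCR, so that the filter capacitor voltage equals i_L times DCR. All component values, the load model, and the switching parameters below are hypothetical; in the dissertation's scheme, the BIST-measured L and DCR are what would make this kind of matching and current scaling possible.

```python
# Illustrative only: matched-RC "lossless" DCR current sensing for a buck
# converter.  Component values are hypothetical, not from the dissertation.
import numpy as np

L, DCR = 4.7e-6, 30e-3          # inductance (H) and DC resistance (ohm), as a BIST would report
Rf, Cf = 15.7e3, 10e-9          # sense filter chosen so Rf*Cf ~ L/DCR (matched condition)

Vin, D, fsw = 12.0, 0.5, 500e3  # input voltage, duty cycle, switching frequency
v_out = D * Vin - 1.0 * DCR     # hold the output where a 1 A average inductor current is the equilibrium

dt = 1.0 / (fsw * 200)          # 200 samples per switching period
i_L, v_c = 1.0, 1.0 * DCR       # start near steady state
true_i, sensed_i = [], []

for tk in np.arange(0.0, 200.0 / fsw, dt):          # simulate 200 switching periods
    v_sw = Vin if (tk * fsw) % 1.0 < D else 0.0     # ideal switch-node voltage
    v_branch = v_sw - v_out                         # voltage across the L + DCR branch
    i_L += (v_branch - i_L * DCR) / L * dt          # inductor current (Euler step)
    v_c += (v_branch - v_c) / (Rf * Cf) * dt        # RC filter placed across the inductor
    true_i.append(i_L)
    sensed_i.append(v_c / DCR)                      # estimate: v_c equals i_L*DCR when matched

print("average inductor current :", round(np.mean(true_i[-200:]), 3), "A")
print("average sensed current   :", round(np.mean(sensed_i[-200:]), 3), "A")
```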
32

Visually Lossless JPEG 2000 for Remote Image Browsing

Oh, Han, Bilgin, Ali, Marcellin, Michael 15 July 2016 (has links)
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG 2000 codestream. This codestream is JPEG 2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG 2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results.
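As a toy illustration of the core idea (coarser quantization can remain visually lossless when the image is displayed at reduced resolution), the sketch below applies a JPEG 2000-style deadzone quantizer with a different step size per displayed scale. The step sizes in the table are invented for illustration; the paper derives its steps from measured visibility thresholds.

```python
# Toy sketch of resolution-dependent quantization.  Step sizes are made up;
# the paper derives them from visibility thresholds per resolution.
def quantize(coeff: float, step: float) -> int:
    """JPEG 2000-style deadzone quantizer index (sign-magnitude)."""
    sign = -1 if coeff < 0 else 1
    return sign * int(abs(coeff) / step)

def dequantize(q: int, step: float) -> float:
    """Mid-point reconstruction of the quantization interval."""
    return 0.0 if q == 0 else (abs(q) + 0.5) * step * (1 if q > 0 else -1)

step_by_scale = {1.0: 2.0, 0.5: 4.0, 0.25: 8.0}   # hypothetical step per displayed scale
coeff = 37.3                                       # an example wavelet coefficient
for scale, step in step_by_scale.items():
    q = quantize(coeff, step)
    print(f"scale {scale}: index {q}, reconstructed {dequantize(q, step)} (error <= {step / 2})")
```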
33

Comparison of lossy and lossless compression algorithms for time series data in the Internet of Vehicles / Jämförelse av destruktiva och icke-förstörande komprimeringsalgorithmer för tidsseriedata inom fordonens internet

Hughes, Joseph January 2023 (has links)
As automotive development advances, connectivity features are continually added to vehicles that, in conjunction, form an Internet of Vehicles. For numerous reasons, it is vital for vehicle manufacturers to collect telemetry from their fleets. However, the volume of the generated data is too immense to feasibly be transmitted to a server due to CPU and memory limitations of embedded hardware and the monetary cost of cellular network usage. The purpose of this thesis is thus to investigate how these issues can be alleviated by the use of real-time compression of time series data before off-board transmission. A hybrid approach is proposed that results in fast and effective performance on a variety of time series exhibiting varying numerical data features, all while limiting the maximum reconstruction error to a user-specified absolute value. We first perform a literature review to identify state-of-the-art time series compression algorithms that run online and provide max-error guarantees. We then choose a subset of lossless and lossy algorithms that are implemented and benchmarked with regard to their compression ratio, resource usage, and reconstruction error when used on time series that exhibit a variety of data features. Finally, we ask whether we are able to run a lossy and a lossless algorithm in succession in order to further increase the compression ratio. The literature review identifies a diverse range of compression algorithms. Out of these, the algorithms Poor Man's Compression - MidRange (PMC-MR) and Swing filter are selected as lossy algorithms, and Run-length Binary Encoding (RLBE) and Gorilla are selected as lossless algorithms. The experiments yield positive results for the lossy algorithms, which excel on different data sets. These are able to achieve compression ratios between 22.0% and 99.5%, depending on the data set, while limiting the max-error to 1%. In contrast, Gorilla achieves compression ratios between 66.6% and 83.7%, outperforming RLBE in nearly all aspects. Moreover, we conclude that there is a strictly positive improvement to the compression ratio when losslessly compressing the result of lossily compressed data. When combining either PMC-MR or Swing filter with Gorilla, we achieve compression ratios between 83.1% and 99.6% across a variety of time series with a maximum error for any given data point of 1%.
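PMC-MR is a published piecewise-constant approximation; the sketch below is a minimal re-implementation based on its usual description (midrange segments bounded by 2*epsilon), not the thesis code. In the hybrid scheme, the (end index, value) pairs it emits would then be passed to a lossless coder such as Gorilla.

```python
# Minimal sketch of Poor Man's Compression - MidRange (PMC-MR): each segment
# is one constant (the midrange) valid while the running max-min stays within
# 2*epsilon, which bounds the per-point reconstruction error by epsilon.
def pmc_mr(values, epsilon):
    """Return (end_index, midrange) pairs covering the input."""
    segments, lo, hi = [], values[0], values[0]
    for i in range(1, len(values)):
        v = values[i]
        if max(hi, v) - min(lo, v) > 2 * epsilon:
            segments.append((i - 1, (lo + hi) / 2.0))   # close the current segment
            lo = hi = v
        else:
            lo, hi = min(lo, v), max(hi, v)
    segments.append((len(values) - 1, (lo + hi) / 2.0))
    return segments

def reconstruct(segments):
    out, start = [], 0
    for end, mid in segments:
        out.extend([mid] * (end - start + 1))
        start = end + 1
    return out

signal = [20.0, 20.3, 20.4, 20.2, 25.0, 25.1, 24.9, 30.0]   # e.g. a coolant temperature trace
compressed = pmc_mr(signal, epsilon=0.5)
assert all(abs(a - b) <= 0.5 for a, b in zip(signal, reconstruct(compressed)))
print(compressed)   # [(3, 20.2), (6, 25.0), (7, 30.0)]
```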
34

MASS: A Multi-Axis Storage Structure for Large XML Documents

Deschler, Kurt W 06 May 2002 (has links)
Due to the wide acceptance of the World Wide Web Consortium (W3C) XPath language specification, native indexing for XML is needed to support path expression queries efficiently. XPath describes the different document tree relationships that may be queried as a set of axes. Many recent proposals for XML indexing focus on accelerating only a small subset of the expressions possible using these axes. In particular, queries by ordinal position and updates that alter document structure are not well supported. A more general indexing solution is needed that not only offers efficient evaluation of all of the XPath axes, but also allows for efficient document update. We introduce MASS, a Multiple Axis Storage Structure, to meet the performance challenge posed by the XPath language. MASS is a storage and indexing solution for large XML documents that eliminates the need for external secondary storage. It is designed around the XPath language, providing efficient interfaces for evaluating all XPath axes. The clustered organization of MASS allows several different axes to be evaluated using the same index structure. The clustering, in conjunction with an internal compression mechanism exploiting specific XML characteristics, keeps the size of the structure small, which further aids efficiency. MASS introduces a versatile scheme for representing document node relationships that always allows for efficient updates. Finally, the integration of a ranked B+ tree allows MASS to efficiently evaluate XPath axes in large documents. We have implemented MASS in C++ and measured the performance of many different XPath expressions and document updates. Our experimental evaluation illustrates that MASS exhibits excellent performance characteristics for both queries and updates and scales well to large documents, making it a practical solution for XML storage. In conjunction with text indexing, MASS provides a complete solution for XML indexing.
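MASS defines its own node-relationship scheme, which the abstract does not spell out; purely as an illustration of how several XPath axes can be answered from one set of node labels, the sketch below uses the classic pre/post-order numbering, under which the major axes reduce to range comparisons.

```python
# Illustration only (not the MASS scheme): pre/post-order labels turn several
# XPath axes into simple comparisons.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)
    pre: int = 0
    post: int = 0

def number(node, counters=None):
    """Assign pre- and post-order numbers in one depth-first pass."""
    if counters is None:
        counters = {"pre": 0, "post": 0}
    node.pre = counters["pre"]; counters["pre"] += 1
    for child in node.children:
        number(child, counters)
    node.post = counters["post"]; counters["post"] += 1
    return node

def is_descendant(a, b):   # a on the descendant axis of b
    return b.pre < a.pre and a.post < b.post

def is_following(a, b):    # a on the following axis of b
    return a.pre > b.pre and a.post > b.post

# <doc><chap><sec/></chap><app/></doc>
sec, app = Node("sec"), Node("app")
chap = Node("chap", children=[sec])
doc = number(Node("doc", children=[chap, app]))

assert is_descendant(sec, doc) and not is_descendant(app, chap)
assert is_following(app, sec)
```

Static numbering of this kind makes document updates costly, which is exactly the gap that MASS's update-friendly relationship scheme and ranked B+ tree are meant to close.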
35

Novel scalable and real-time embedded transceiver system

Mohammed, Rand Basil January 2017 (has links)
Our society increasingly relies on the transmission and reception of vast amounts of data using serial connections featuring ever-increasing bit rates. In imaging systems, for example, the frame rate achievable is often limited by the serial link between camera and host even when modern serial buses with the highest bit rates are used. This thesis documents a scalable embedded transceiver system with a bandwidth and interface standard that can be adapted to suit a particular application. This new approach for a real-time scalable embedded transceiver system is referred to as a Novel Reference Model (NRM), which connects two or more applications through a transceiver network in order to provide real-time data to a host system. The transceiver interfaces for which the NRM model has been tested include LVDS, GIGE, PMA-direct, Rapid-IO and XAUI, each supporting a specific transceiver speed range suited to a particular type of transceiver physical medium. The scalable serial link approach has been extended with lossless data compression with the aim of further increasing dataflow at a given bit rate. Two lossless compression methods were implemented, based on Huffman coding and a novel method called the Reduced Lossless Compression Method (RLCM). Both methods are integrated into the scalable transceivers, providing a comprehensive solution for optimal data transmission over a variety of different interfaces. The NRM is implemented on a field programmable gate array (FPGA) using a system architecture that consists of three layers: application, transport and physical. A Terasic DE4 board was used as the main platform for implementing and testing the embedded system, while Quartus-II software and tools were used to design and debug the embedded hardware systems.
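The RLCM method is specific to the thesis and is not reproduced here; as a point of reference for the other integrated method, the following is a minimal software sketch of Huffman code construction (the thesis implements its codecs in FPGA logic, not Python).

```python
# Minimal Huffman code construction, for illustration only.
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Map each byte value in `data` to a prefix-free bit string."""
    heap = [[count, i, sym] for i, (sym, count) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], next_id, [lo[2], hi[2]]])
        next_id += 1
    codes = {}
    def walk(node, prefix=""):
        if isinstance(node, list):        # internal node: [left, right]
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                             # leaf: a byte value
            codes[node] = prefix or "0"
    walk(heap[0][2])
    return codes

frame = b"AAAABBBCCD"                     # stand-in for a block of pixel data
codes = huffman_code(frame)
coded_bits = sum(len(codes[b]) for b in frame)
print(codes, f"-> {coded_bits} bits instead of {len(frame) * 8}")
```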
36

Evaluation and Hardware Implementation of Real-Time Color Compression Algorithms

Ojani, Amin, Caglar, Ahmet January 2008 (has links)
A major bottleneck, for performance as well as power consumption, for graphics hardware in mobile devices is the amount of data that needs to be transferred to and from memory. In, for example, hardware-accelerated 3D graphics, a large part of the memory accesses are due to large and frequent color buffer data transfers. In a graphics hardware block, color data is typically processed in RGB format. For both 3D graphics rasterization and image composition, several pixels need to be read from and written to memory to generate a pixel in the frame buffer. This generates a lot of data traffic on the memory interfaces, which impacts both performance and power consumption. Therefore it is important to minimize the amount of color buffer data. One way of reducing the required memory bandwidth is to compress the color data before writing it to memory and decompress it before using it in the graphics hardware block. This compression/decompression must be done “on-the-fly”, i.e. it has to be very fast so that the hardware accelerator does not have to wait for data. In this thesis, we investigated several exact (lossless) color compression algorithms from a hardware implementation point of view, for use in high-throughput hardware. Our study shows that the compression/decompression datapath is readily implementable even with stringent area and throughput constraints. However, memory interfacing of these blocks is more critical and could be the dominating factor.
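None of the candidate algorithms are named in the abstract, so the following is only a generic illustration of what an exact (lossless) color-buffer compression scheme can look like: per tile, store the per-channel minimum plus narrow bit-packed offsets when they fit, and fall back to raw data otherwise so losslessness is always preserved.

```python
# Generic illustration of exact tile-based color compression (not one of the
# algorithms evaluated in the thesis).
def compress_tile(tile):
    """tile: list of (r, g, b) values in 0..255. Returns (mode, payload)."""
    mins = tuple(min(px[c] for px in tile) for c in range(3))
    spans = tuple(max(px[c] for px in tile) - mins[c] for c in range(3))
    bits = tuple(s.bit_length() for s in spans)            # offset width per channel
    packed_size = len(tile) * sum(bits) + 3 * 8 + 3 * 4    # offsets + mins + widths, in bits
    if packed_size >= len(tile) * 24:
        return "raw", list(tile)                           # fallback keeps the scheme lossless
    offsets = [tuple(px[c] - mins[c] for c in range(3)) for px in tile]
    return "packed", (mins, bits, offsets)

def decompress_tile(mode, payload):
    if mode == "raw":
        return list(payload)
    mins, _bits, offsets = payload
    return [tuple(mins[c] + off[c] for c in range(3)) for off in offsets]

tile = [(100, 150, 200)] * 12 + [(101, 152, 205)] * 4      # a fairly flat 4x4 tile
mode, payload = compress_tile(tile)
assert decompress_tile(mode, payload) == tile
print(mode)   # "packed" for flat tiles, "raw" for noisy ones
```

A datapath like this is simple to pipeline, which matches the thesis' conclusion that the compression/decompression logic itself is not the hard part; the memory interfacing around it is.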
37

Algorithmes et structures de données compactes pour la visualisation interactive d’objets 3D volumineux / Algorithms and compact data structures for interactive visualization of gigantic 3D objects

Jamin, Clément 25 September 2009 (has links)
Les méthodes de compression progressives sont désormais arrivées à maturité (les taux de compression sont proches des taux théoriques) et la visualisation interactive de maillages volumineux est devenue une réalité depuis quelques années. Cependant, même si l’association de la compression et de la visualisation est souvent mentionnée comme perspective, très peu d’articles traitent réellement ce problème, et les fichiers créés par les algorithmes de visualisation sont souvent beaucoup plus volumineux que les originaux. En réalité, la compression favorise une taille réduite de fichier au détriment de l’accès rapide aux données, alors que les méthodes de visualisation se concentrent sur la rapidité de rendu : les deux objectifs s’opposent et se font concurrence. A partir d’une méthode de compression progressive existante incompatible avec le raffinement sélectif et interactif, et uniquement utilisable sur des maillages de taille modeste, cette thèse tente de réconcilier compression sans perte et visualisation en proposant de nouveaux algorithmes et structures de données qui réduisent la taille des objets tout en proposant une visualisation rapide et interactive. En plus de cette double capacité, la méthode proposée est out-of-core et peut traiter des maillages de plusieurs centaines de millions de points. Par ailleurs, elle présente l’avantage de traiter tout complexe simplicial de dimension n, des soupes de triangles aux maillages volumiques. / Progressive compression methods are now mature (obtained rates are close to theoretical bounds) and interactive visualization of huge meshes has been a reality for a few years. However, even if the combination of compression and visualization is often mentioned as a perspective, very few papers deal with this problem, and the files created by visualization algorithms are often much larger than the original ones. In fact, compression favors a low file size to the detriment of fast data access, whereas visualization methods focus on rendering speed: both goals are opposed and competing. Starting from an existing progressive compression method incompatible with selective and interactive refinements and usable on small-sized meshes only, this thesis tries to reconcile lossless compression and visualization by proposing new algorithms and data structures which radically reduce the size of the objects while supporting fast interactive navigation. In addition to this double capability, our method works out-of-core and can handle meshes containing several hundred million vertices. Furthermore, it presents the advantage of dealing with any n-dimensional simplicial complex, which includes triangle soups or volumetric meshes.
38

Lossless Message Compression

Hansson, Erik, Karlsson, Stefan January 2013 (has links)
In this thesis we investigated whether using compression when sending inter-process communication (IPC) messages can be beneficial or not. A literature study on lossless compression resulted in a compilation of algorithms and techniques. Using this compilation, the algorithms LZO, LZFX, LZW, LZMA, bzip2 and LZ4 were selected to be integrated into LINX as an extra layer to support lossless message compression. The testing involved sending messages with real telecom data between two nodes on a dedicated network, with different network configurations and message sizes. To calculate the effective throughput for each algorithm, the round-trip time was measured. We concluded that the fastest algorithms, i.e. LZ4, LZO and LZFX, were most efficient in our tests. / I detta examensarbete har vi undersökt huruvida komprimering av meddelanden för interprocesskommunikation (IPC) kan vara fördelaktigt. En litteraturstudie om förlustfri komprimering resulterade i en sammanställning av algoritmer och tekniker. Från den här sammanställningen utsågs algoritmerna LZO, LZFX, LZW, LZMA, bzip2 och LZ4 för integrering i LINX som ett extra lager för att stödja komprimering av meddelanden. Algoritmerna testades genom att skicka meddelanden innehållande riktig telekom-data mellan två noder på ett dedikerat nätverk. Detta gjordes med olika nätverksinställningar samt storlekar på meddelandena. Den effektiva nätverksgenomströmningen räknades ut för varje algoritm genom att mäta omloppstiden. Resultatet visade att de snabbaste algoritmerna, alltså LZ4, LZO och LZFX, var effektivast i våra tester.
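For a rough feel of what such a benchmark measures, the sketch below uses Python's standard-library bz2 as a stand-in for the bzip2 codec from the thesis and reports the saved fraction and local compression speed. This is not the LINX setup: the thesis measures effective throughput from round-trip times on a dedicated network, and the payload here is synthetic.

```python
# Stand-in benchmark: stdlib bz2 for the bzip2 codec; synthetic telecom-like
# payload; local timing only (the thesis measures round-trip time over LINX).
import bz2
import time

def ratio_and_speed(payload: bytes):
    t0 = time.perf_counter()
    compressed = bz2.compress(payload)
    elapsed = time.perf_counter() - t0
    saved = 1.0 - len(compressed) / len(payload)   # fraction of bytes saved
    return saved, len(payload) / elapsed           # bytes compressed per second

message = b"cellId=4711;rssi=-97;throughput=12.4;drops=0;" * 2000
saved, speed = ratio_and_speed(message)
print(f"space saved: {saved:.1%}, compression speed: {speed / 1e6:.1f} MB/s")
```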
39

Codage d'images avec et sans pertes à basse complexité et basé contenu / Lossy and lossless image coding with low complexity and based on the content

Liu, Yi 18 March 2015 (has links)
Ce projet de recherche doctoral vise à proposer solution améliorée du codec de codage d’images LAR (Locally Adaptive Resolution), à la fois d’un point de vue performances de compression et complexité. Plusieurs standards de compression d’images ont été proposés par le passé et mis à profit dans de nombreuses applications multimédia, mais la recherche continue dans ce domaine afin d’offrir de plus grande qualité de codage et/ou de plus faibles complexité de traitements. JPEG fut standardisé il y a vingt ans, et il continue pourtant à être le format de compression le plus utilisé actuellement. Bien qu’avec de meilleures performances de compression, l’utilisation de JPEG 2000 reste limitée due à sa complexité plus importe comparée à JPEG. En 2008, le comité de standardisation JPEG a lancé un appel à proposition appelé AIC (Advanced Image Coding). L’objectif était de pouvoir standardiser de nouvelles technologies allant au-delà des standards existants. Le codec LAR fut alors proposé comme réponse à cet appel. Le système LAR tend à associer une efficacité de compression et une représentation basée contenu. Il supporte le codage avec et sans pertes avec la même structure. Cependant, au début de cette étude, le codec LAR ne mettait pas en oeuvre de techniques d’optimisation débit/distorsions (RDO), ce qui lui fut préjudiciable lors de la phase d’évaluation d’AIC. Ainsi dans ce travail, il s’agit dans un premier temps de caractériser l’impact des principaux paramètres du codec sur l’efficacité de compression, sur la caractérisation des relations existantes entre efficacité de codage, puis de construire des modèles RDO pour la configuration des paramètres afin d’obtenir une efficacité de codage proche de l’optimal. De plus, basée sur ces modèles RDO, une méthode de « contrôle de qualité » est introduite qui permet de coder une image à une cible MSE/PSNR donnée. La précision de la technique proposée, estimée par le rapport entre la variance de l’erreur et la consigne, est d’environ 10%. En supplément, la mesure de qualité subjective est prise en considération et les modèles RDO sont appliqués localement dans l’image et non plus globalement. La qualité perceptuelle est visiblement améliorée, avec un gain significatif mesuré par la métrique de qualité objective SSIM. Avec un double objectif d’efficacité de codage et de basse complexité, un nouveau schéma de codage LAR est également proposé dans le mode sans perte. Dans ce contexte, toutes les étapes de codage sont modifiées pour un meilleur taux de compression final. Un nouveau module de classification est également introduit pour diminuer l’entropie des erreurs de prédiction. Les expérimentations montrent que ce codec sans perte atteint des taux de compression équivalents à ceux de JPEG 2000, tout en économisant 76% du temps de codage et de décodage. / This doctoral research project aims at designing an improved solution for the still image codec called LAR (Locally Adaptive Resolution), in terms of both compression performance and complexity. Several image compression standards have been proposed and used in multimedia applications, but research in this field continues in pursuit of higher coding quality and/or lower processing complexity. JPEG was standardized twenty years ago, yet it is still the most widely used compression format today. Despite its better coding efficiency, the adoption of JPEG 2000 remains limited by its higher computational cost compared to JPEG. In 2008, the JPEG Committee announced a Call for Advanced Image Coding (AIC). This call aims to standardize potential technologies going beyond the existing JPEG standards. The LAR codec was proposed as one response to this call. The LAR framework tends to associate compression efficiency with a content-based representation. It supports both lossy and lossless coding within the same structure. However, at the beginning of this study, the LAR codec did not implement rate-distortion optimization (RDO). This shortcoming was detrimental to LAR during the AIC evaluation step. Thus, this work first characterizes the impact of the main codec parameters on compression efficiency, and then constructs RDO models for configuring the LAR parameters so as to achieve optimal or near-optimal coding efficiency. Further, based on these RDO models, a “quality constraint” method is introduced to encode an image at a given target MSE/PSNR. The accuracy of the proposed technique, estimated by the ratio between the error variance and the setpoint, is about 10%. In addition, subjective quality measurement is taken into consideration, and the RDO models are applied locally in the image rather than globally. The perceptual quality is improved, with a significant gain measured by the objective quality metric SSIM (structural similarity). Aiming at a low-complexity and efficient image codec, a new coding scheme is also proposed in lossless mode under the LAR framework. In this context, all the coding steps are changed for a better final compression ratio. A new classification module is also introduced to decrease the entropy of the prediction errors. Experiments show that this lossless codec achieves a compression ratio equivalent to that of JPEG 2000, while saving on average 76% of the encoding and decoding time.
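As a small aside on the “quality constraint” mode mentioned above: for 8-bit images an MSE target and a PSNR target are interchangeable through PSNR = 10*log10(255^2 / MSE). The snippet below only shows that conversion (8-bit data assumed); it is not part of the LAR codec.

```python
# MSE <-> PSNR conversion for 8-bit images (illustration, not LAR code).
import math

def psnr_from_mse(mse: float) -> float:
    return 10.0 * math.log10(255.0 ** 2 / mse)

def mse_from_psnr(psnr: float) -> float:
    return 255.0 ** 2 / (10.0 ** (psnr / 10.0))

target_psnr = 40.0                       # dB
target_mse = mse_from_psnr(target_psnr)  # ~6.5
print(round(target_mse, 2), round(psnr_from_mse(target_mse), 1))
```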
40

Lossless Image compression using MATLAB : Comparative Study

Kodukulla, Surya Teja January 2020 (has links)
Context: Image compression is one of the key and important applications in commercial, research, defence and medical fields. Larger image files cannot be processed or stored quickly and efficiently. Hence compressing images while maintaining the maximum quality possible is very important for real-world applications. Objectives: Lossy compression is widely popular for image compression and used in commercial applications. In order to perform efficient work related to images, the quality in many situations needs to be high while having a comparatively low file size. Hence lossless compression algorithms are used in this study to compare the lossless algorithms and to check which algorithm performs the compression while retaining the quality with a decent compression ratio. Method: The lossless algorithms compared are LZW, RLE, Huffman, DCT in lossless mode, and DWT. The compression techniques are implemented in MATLAB by using the image processing toolbox. The compressed images are compared for subjective image quality. The images are compressed with emphasis on maintaining the quality rather than focusing on diminishing file size. Result: The LZW algorithm compression produces binary images, failing in this implementation to produce a lossless image. Huffman and RLE algorithms produce similar results with compression ratios in the range of 2.5 to 3.7, and the algorithms are based on redundancy reduction. The DCT and DWT algorithms compress every element in the matrix defined for the images, maintaining lossless quality with compression ratios in the range 2 to 3.5. Conclusion: The DWT algorithm is best suited for a more efficient way to compress an image in a lossless technique. As wavelets are used in this compression, all the elements in the image are compressed while retaining the quality. Huffman and RLE produce lossless images, but for a large variety of images, some of the images may not be compressed with complete efficiency.
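As a concrete reference for the simplest of the compared techniques, here is a minimal run-length encoding (RLE) round trip in Python; it is illustrative only, since the thesis implements the algorithms in MATLAB with the image processing toolbox.

```python
# Minimal RLE round trip (illustrative; the thesis work is in MATLAB).
def rle_encode(pixels):
    runs = []
    run_val, run_len = pixels[0], 1
    for p in pixels[1:]:
        if p == run_val and run_len < 255:   # cap runs so the length fits in a byte
            run_len += 1
        else:
            runs.append((run_val, run_len))
            run_val, run_len = p, 1
    runs.append((run_val, run_len))
    return runs

def rle_decode(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

row = [0] * 12 + [255] * 4 + [0] * 8        # one row of a binary image
encoded = rle_encode(row)
assert rle_decode(encoded) == row            # lossless round trip
print(f"{len(row)} pixels -> {len(encoded)} runs, ratio {len(row) / (2 * len(encoded)):.1f}")
```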
