241

Jämförelse av datakomprimeringsalgoritmer för sensordata i motorstyrenheter / Comparison of data compression algorithms for sensordata in engine control units

Möller, Malin, Persson, Dominique January 2023 (has links)
Limited processor and memory capacity is a major challenge for logging sensor signals in engine control units. Compression can be used to store larger amounts of data within these limits, but implementing compression in engine control units requires algorithms that can cope with the constrained processor capacity while still achieving an acceptable compression ratio. This thesis compares compression algorithms on sensor data from engine control units in order to investigate which algorithm(s) are best suited for this application. The work aims to improve the possibilities of logging sensor data and thus make troubleshooting of the engine control units more efficient. This was done by developing a system that performs compression on sampled sensor signals and calculates the compression time and compression ratio. The results indicated that delta-of-delta compression performed better than XOR compression for the tested data sets: delta-of-delta achieved a significantly better compression ratio, while the differences in compression time between the algorithms were minor. Delta-of-delta compression was judged to have good potential for implementation in engine control unit logging systems. The algorithm is deemed well suited for logging smaller time series during important events; for continuous logging of larger time series, further research is suggested to investigate how the compression ratio can be improved further.
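The abstract does not reproduce the algorithm itself. As an illustration of the delta-of-delta idea it evaluates, here is a minimal Python sketch; the function names and the representation (a plain list of residuals, with no final bit-packing stage) are assumptions for illustration, not taken from the thesis:

```python
def delta_of_delta_encode(samples):
    """Encode a time series as second-order differences.

    Slowly varying sensor signals yield long runs of small
    (often zero) residuals, which a back-end coder can pack tightly.
    """
    if len(samples) < 2:
        return list(samples)
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    # First sample and first delta are stored verbatim;
    # the rest are differences of consecutive deltas.
    dod = [d2 - d1 for d1, d2 in zip(deltas, deltas[1:])]
    return [samples[0], deltas[0]] + dod

def delta_of_delta_decode(encoded):
    """Invert the encoding by re-accumulating deltas."""
    if len(encoded) < 2:
        return list(encoded)
    first, d = encoded[0], encoded[1]
    out = [first, first + d]
    for dod in encoded[2:]:
        d += dod
        out.append(out[-1] + d)
    return out
```

On a near-linear ramp such as `[10, 12, 14, 17, 21]` the residuals collapse to `[10, 2, 0, 1, 1]`, which shows why the scheme favours smooth sensor traces.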
242

Integer Compression in NVRAM-centric Data Stores: Comparative Experimental Analysis to DRAM

Zarubin, Mikhail, Damme, Patrick, Kissinger, Thomas, Habich, Dirk, Lehner, Wolfgang, Willhalm, Thomas 01 September 2022 (has links)
Lightweight integer compression algorithms play an important role in in-memory database systems in tackling the growing gap between processor speed and main memory bandwidth. Thus, there is a large number of algorithms to choose from, with different algorithms tailored to different data characteristics. As we show in this paper, the availability of byte-addressable non-volatile random-access memory (NVRAM), a novel type of main memory with specific characteristics, increases the overall complexity in this domain. In particular, we provide a detailed evaluation of state-of-the-art lightweight integer compression schemes and database operations on NVRAM and compare them with DRAM. Furthermore, we reason about possible deployments of middle- and heavyweight approaches for better adaptation to NVRAM characteristics. Finally, we investigate a combined approach where both volatile and non-volatile memories are used in a cooperative fashion, as is likely to be the case in hybrid and NVRAM-centric database systems.
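The paper does not list its benchmarked schemes here. As one widely used member of the lightweight integer compression family, here is a minimal sketch of byte-oriented variable-length encoding (VByte); it stands in for the class of schemes evaluated, not for any specific algorithm from the paper:

```python
def vbyte_encode(values):
    """VByte: 7 data bits per byte; a set high bit means
    more bytes of this value follow (little-endian groups)."""
    out = bytearray()
    for v in values:
        while v >= 0x80:
            out.append((v & 0x7F) | 0x80)
            v >>= 7
        out.append(v)  # final byte has the high bit clear
    return bytes(out)

def vbyte_decode(data):
    """Reassemble 7-bit groups until a terminating byte is seen."""
    values, v, shift = [], 0, 0
    for b in data:
        v |= (b & 0x7F) << shift
        if b & 0x80:
            shift += 7
        else:
            values.append(v)
            v, shift = 0, 0
    return values
```

Small integers cost a single byte each, which is the source of the scheme's speed/ratio trade-off on skewed value distributions.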
243

Compression of Endpoint Identifiers in Delay Tolerant Networking

Young, David A. January 2013 (has links)
No description available.
244

Signal Compression Methods for a Wear Debris Sensor

Bhattaram, Sneha 15 September 2014 (has links)
No description available.
245

[en] THE BURROWS-WHEELER TRANSFORM AND ITS APPLICATIONS TO COMPRESSION / [pt] A TRANSFORMADA DE BURROWS-WHEELER E SUA APLICAÇÃO À COMPRESSÃO

JULIO CESAR DUARTE 23 July 2003 (has links)
[en] The Burrows-Wheeler Transform, based on the sorting of contexts, transforms a sequence of characters into a new sequence that is easier to compress by an algorithm that exploits long runs of repeated characters. Combined with the recoding provided by the Move-to-Front algorithm and followed by a coding of the generated integers, these stages form a new family of compressors that achieve excellent compression rates with good compression and decompression times. This work examines the transform in detail, along with its variations and some alternatives for the algorithms used together with it. As a final result, we present a combination of strategies that produces compression rates for text data better than those offered by the implementations available to date.
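The two front-end stages the abstract describes can be sketched minimally in Python. This is an illustrative, quadratic-time version (sorting full rotations) rather than the suffix-array construction a real compressor would use, and the sentinel character is an assumption:

```python
def bwt(s, eos="\x00"):
    """Burrows-Wheeler transform via sorted rotations.

    Appends a sentinel, sorts all rotations of the string, and
    emits the last column. O(n^2 log n) sketch, fine for small inputs.
    """
    s += eos
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def move_to_front(s, alphabet=None):
    """Recode symbols as indices into a self-adjusting list:
    recently seen symbols get small indices, so the BWT's
    character runs become runs of small integers."""
    table = list(alphabet or sorted(set(s)))
    out = []
    for ch in s:
        i = table.index(ch)
        out.append(i)
        table.insert(0, table.pop(i))  # move symbol to the front
    return out
```

The classic example `bwt("banana")` groups the repeated characters (`annb\x00aa`), and Move-to-Front then turns such runs into many zeros for the final integer coder.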
246

Desenvolvimento de um tomógrafo de ressonância magnética: integração e otimização. / Development of a magnetic resonance tomograph: integration and improvement.

Martins, Mateus Jose 07 February 1995 (has links)
The present work describes the development of a Magnetic Resonance tomography system for use in medical diagnostics. It is based on commercially available subsystems, such as those used in commercial MRI equipment. The main contributions of this project were the development of the complementary electronics and software needed to turn the acquired hardware into an MR tomograph for medical diagnostics. This includes not only the design of the complex software needed to generate the RF pulses, gradient waveforms, data acquisition, and the pulse sequences required to synchronize them all, but also the development of a user-friendly interface for entering patient information, selecting MR imaging techniques, and interactively viewing images and selecting planes. A new data compression algorithm that reduces the storage requirement of raw MR image data without information loss is also presented and implemented. A comparison with other general-purpose compression implementations shows superior performance in both compression ratio and execution time.
247

Error resilience for video coding services over packet-based networks

Zhang, Jian, Electrical Engineering, Australian Defence Force Academy, UNSW January 1999 (has links)
Error resilience is an important issue when coded video data is transmitted over wired and wireless networks. Errors can be introduced by network congestion, mis-routing and channel noise. These transmission errors can result in bit errors being introduced into the transmitted data, or in packets of data being completely lost. Consequently, the quality of the decoded video is degraded significantly. This thesis describes new techniques for minimising this degradation. To verify video error resilience tools, it is first necessary to consider the methods used to carry out experimental measurements. For most audio-visual services, streams of both audio and video data need to be transmitted simultaneously on a single channel. The inclusion of the impact of multiplexing schemes, such as MPEG 2 Systems, in error resilience studies is also an important consideration. It is shown that error resilience measurements including the effect of the Systems Layer differ significantly from those based only on the Video Layer. Two major issues of error resilience are investigated within this thesis: resynchronisation after error detection, and error concealment. Results for resynchronisation using small slices, adaptive slice sizes and macroblock resynchronisation schemes are provided. These measurements show that the macroblock resynchronisation scheme achieves the best performance, although it is not included in the MPEG 2 standard. The performance of the adaptive slice size scheme, however, is similar to that of the macroblock resynchronisation scheme, and this approach is compatible with the MPEG 2 standard. The most important contribution of this thesis is a new concealment technique, Decoder Motion Vector Estimation (DMVE), which improves decoded video quality significantly. This technique utilises the temporal redundancy between the current and previous frames, and the correlation between lost macroblocks and their surrounding pixels. Motion estimation can therefore be applied again to search the previous picture for a match to the lost macroblocks. The process is similar to that performed by the encoder, but it takes place in the decoder. The integration of techniques such as DMVE with small slices, adaptive slice sizes or macroblock resynchronisation is also evaluated, giving an overview of the performance of the individual techniques compared with the combined ones. Results show that high performance can be achieved by integrating DMVE with an effective resynchronisation scheme, even at high cell loss rates. The results of this thesis demonstrate clearly that the MPEG 2 standard is capable of providing a high level of error resilience, even in the presence of high loss. The key to this performance is appropriate tuning of encoders and effective concealment in decoders.
248

Deterministisk Komprimering/Dekomprimering av Testvektorer med Hjälp av en Inbyggd Processor och Faxkodning / Deterministic Test Vector Compression/Decompression Using an Embedded Processor and Facsimile Coding

Persson, Jon January 2005 (has links)
Modern semiconductor design methods make it possible to design increasingly complex systems-on-a-chip (SOCs). Testing such SOCs becomes highly expensive due to rapidly increasing test data volumes, with longer test times as a result. Several approaches exist that compress the test stimuli and add hardware for decompression. This master's thesis presents a test data compression method based on a modified facsimile code. An embedded processor on the SOC is used to decompress the data and apply it to the cores of the SOC. The use of already existing hardware reduces the need for additional hardware.

Test data may be rearranged in ways that affect the compression ratio. Several modifications are discussed and tested. To be realistic, a decompression algorithm has to be able to run on a system with limited resources. With an assembler implementation it is shown that the proposed method can be realized effectively in such environments. Experimental results where the proposed method is applied to benchmark circuits show that it compares well with similar methods.

A method of including the response vector is also presented. This approach makes it possible to abort a test as soon as an error is discovered, while still compressing the data used. To correctly compare the test response with the expected one, the data needs to include don't-care bits. The technique uses a mask vector to mark the don't-care bits. The test vector, response vector and mask vector are merged in four different ways to find the most effective arrangement.
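The thesis's modified facsimile code is not reproduced in the abstract. As an illustration of the underlying idea, here is a minimal sketch of the run-length extraction step used in one-dimensional facsimile (Modified Huffman) coding, without the Huffman code-word tables that a real facsimile or test-vector coder would apply to the run lengths:

```python
def runs(bits):
    """Alternating run lengths over a bit string, starting with
    the run of '0' bits (a zero-length first run if the vector
    starts with '1'), as in 1-D facsimile coding."""
    out, current, length = [], "0", 0
    for b in bits:
        if b == current:
            length += 1
        else:
            out.append(length)
            current, length = b, 1
    out.append(length)
    return out

def expand(run_lengths):
    """Invert runs(): rebuild the bit string from its run lengths."""
    pieces, sym = [], "0"
    for n in run_lengths:
        pieces.append(sym * n)
        sym = "1" if sym == "0" else "0"
    return "".join(pieces)
```

Test vectors with long stretches of identical (or don't-care-filled) bits reduce to short run-length lists, which is what makes facsimile-style coding a plausible fit for test stimuli.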
249

Geographic Indexing and Data Management for 3D-Visualisation

Ottoson, Patrik January 2001 (has links)
No description available.
250

Discovering and Tracking Interesting Web Services

Rocco, Daniel J. (Daniel John) 01 December 2004 (has links)
The World Wide Web has become the standard mechanism for information distribution and scientific collaboration on the Internet. This dissertation research explores a suite of techniques for discovering relevant dynamic sources in a specific domain of interest and for managing Web data effectively. We first explore techniques for discovery and automatic classification of dynamic Web sources. Our approach utilizes a service class model of the dynamic Web that allows the characteristics of interesting services to be specified using a service class description. To promote effective Web data management, the Page Digest Web document encoding eliminates tag redundancy and places structure, content, tags, and attributes into separate containers, each of which can be referenced in isolation or in conjunction with the other elements of the document. The Page Digest Sentinel system leverages our unique encoding to provide efficient and scalable change monitoring for arbitrary Web documents through document compartmentalization and semantic change request grouping. Finally, we present XPack, an XML document compression system that uses a containerized view of an XML document to provide both good compression and efficient querying over compressed documents. XPack's queryable XML compression format is general-purpose, does not rely on domain knowledge or particular document structural characteristics for compression, and achieves better query performance than standard query processors using text-based XML. Our research expands the capabilities of existing dynamic Web techniques, providing superior service discovery and classification services, efficient change monitoring of Web information, and compartmentalized document handling. DynaBot is the first system to combine a service class view of the Web with a modular crawling architecture to provide automated service discovery and classification. 
The Page Digest Web document encoding represents Web documents efficiently by separating the individual characteristics of the document. The Page Digest Sentinel change monitoring system utilizes the Page Digest document encoding for scalable change monitoring through efficient change algorithms and intelligent request grouping. Finally, XPack is the first XML compression system that delivers compression rates similar to existing techniques while supporting better query performance than standard query processors using text-based XML.
