271

Comparison of lossy and lossless compression algorithms for time series data in the Internet of Vehicles / Jämförelse av destruktiva och icke-förstörande komprimeringsalgorithmer för tidsseriedata inom fordonens internet

Hughes, Joseph January 2023 (has links)
As automotive development advances, connectivity features are continually added to vehicles that, taken together, form an Internet of Vehicles. For numerous reasons, it is vital for vehicle manufacturers to collect telemetry from their fleets. However, the volume of the generated data is too large to feasibly be transmitted to a server, due to the CPU and memory limitations of embedded hardware and the monetary cost of cellular network usage. The purpose of this thesis is thus to investigate how these issues can be alleviated by real-time compression of time series data before off-board transmission. A hybrid approach is proposed that yields fast and effective performance on a variety of time series exhibiting different numerical data features, while limiting the maximum reconstruction error to a user-specified absolute value. We first perform a literature review to identify state-of-the-art time series compression algorithms that run online and provide max-error guarantees. We then choose a subset of lossless and lossy algorithms that are implemented and benchmarked with regard to their compression ratio, resource usage, and reconstruction error on time series exhibiting a variety of data features. Finally, we ask whether a lossy and a lossless algorithm can be run in succession to further increase the compression ratio. The literature review identifies a diverse range of compression algorithms. Of these, Poor Man's Compression - MidRange (PMC-MR) and Swing filter are selected as lossy algorithms, and Run-length Binary Encoding (RLBE) and Gorilla are selected as lossless algorithms. The experiments yield positive results for the lossy algorithms, which excel on different data sets. They achieve compression ratios between 22.0% and 99.5%, depending on the data set, while limiting the max-error to 1%. In contrast, Gorilla achieves compression ratios between 66.6% and 83.7%, outperforming RLBE in nearly all aspects. Moreover, we conclude that there is a strictly positive improvement to the compression ratio when losslessly compressing the result of lossily compressed data. When combining either PMC-MR or Swing filter with Gorilla, we achieve compression ratios between 83.1% and 99.6% across a variety of time series, with a maximum error for any given data point of 1%.
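
Of the selected algorithms, PMC-MR is the simplest to illustrate. The Python sketch below is not the thesis implementation; the function name, the (timestamp, value) input format and the segment encoding are assumptions chosen for brevity. It shows the mechanism behind the max-error guarantee: a segment grows as long as its value range stays within twice the error bound, and is then emitted as a single midrange value.

```python
# A minimal sketch of Poor Man's Compression - MidRange (PMC-MR).
# Hypothetical interface: the thesis implementation, I/O format and
# segment encoding are not reproduced here.

def pmc_midrange(samples, epsilon):
    """Compress (timestamp, value) pairs into constant segments.

    Each segment is stored as (last_timestamp, midrange); every value
    covered by the segment differs from midrange by at most epsilon.
    """
    segments = []
    seg_min = seg_max = None
    last_t = None
    for t, v in samples:
        if seg_min is None:
            seg_min = seg_max = v
        elif max(seg_max, v) - min(seg_min, v) > 2 * epsilon:
            # Adding v would break the error bound: emit the finished segment.
            segments.append((last_t, (seg_min + seg_max) / 2))
            seg_min = seg_max = v
        else:
            seg_min = min(seg_min, v)
            seg_max = max(seg_max, v)
        last_t = t
    if seg_min is not None:
        segments.append((last_t, (seg_min + seg_max) / 2))
    return segments

# A slowly drifting signal of 1000 points collapses to a handful of segments.
data = [(t, 20.0 + 0.001 * t) for t in range(1000)]
print(pmc_midrange(data, epsilon=0.2))
```

Because every reconstructed value is the midrange of its segment, the per-point error never exceeds epsilon, which is how the error can be capped at 1% while still reaching high compression on smooth signals.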
272

PERFORMANCE AND COMPLEXITY CO-EVALUATIONS OF MPEG4-ALS COMPRESSION STANDARD FOR LOW-LATENCY MUSIC COMPRESSION

Matthew, Isaac Kevin 26 September 2008 (has links)
No description available.
273

Ultra High Compression For Weather Radar Reflectivity Data

Makkapati, Vishnu Vardhan 17 November 2006 (has links)
Honeywell Technology Solutions Lab, India / Weather is a major contributing factor in aviation accidents, incidents and delays. Doppler weather radar has emerged as a potent tool to observe weather. Aircraft carry onboard radars, but their range and angular resolution are limited. Networks of ground-based weather radars provide extensive coverage of weather over large geographic regions, and it would be helpful if these data could be transmitted to the pilot. However, the data are highly voluminous, and the bandwidth of ground-air communication links is limited and expensive. Hence, the data have to be compressed to an extent where they are suitable for transmission over low-bandwidth links. Several methods have been developed to compress pictorial data, but general-purpose schemes do not take the nature of the data into account and hence do not yield high compression ratios. A scheme for extreme compression of weather radar data is developed in this thesis that does not significantly degrade the meteorological information contained in the data. The method is based on contour encoding. It approximates a contour by a set of systematically chosen 'control points' that preserve its fine structure up to a certain level. The contours may be obtained using a thresholding process based on NWS or custom reflectivity levels. This process may result in region and hole contours, enclosing 'high' or 'low' areas, which may be nested. A tag bit is used to label region and hole contours. The control point extraction method first obtains a smoothed reference contour by averaging the original contour. The points on the original contour with maximum deviation from the smoothed contour between the crossings of these two contours are then identified and designated as control points. Additional control points are added midway between a control point and the crossing points on either side of it if the length of the segment between the crossing points exceeds a certain length. The control points, referenced with respect to the top-left corner of each contour for compact quantification, are transmitted to the receiving end. At the receiving end the contour is retrieved from the control points using spline interpolation. The region and hole contours are identified using the tag bit. The pixels between the region and hole contours at a given threshold level are filled using the corresponding color. This is repeated until all the contours for a given threshold level are exhausted, and the process is carried out for all other thresholds, resulting in a composite picture of the reconstructed field. Extensive studies have been conducted using metrics such as compression ratio, fidelity of reconstruction and visual perception. In particular, the effects of the smoothing factor, the degree of spline interpolation and the choice of thresholds are studied. It is shown that a smoothing percentage of about 10% is optimal for most data. A spline of degree 2 is found to be best suited for smooth contour reconstruction. Augmenting the NWS thresholds improves visual perception, but at the expense of a decrease in the compression ratio. Two enhancements to the basic method are proposed: adjustments to the control points to achieve better reconstruction, and bit manipulations on the control points to obtain higher compression. The spline interpolation inherently tends to move the reconstructed contour away from the control points. This is compensated to some extent by stretching the control points away from the smoothed reference contour; the amount and direction of stretch are optimized with respect to actual data fields to yield better reconstruction. In the bit manipulation study, the effects of discarding the least significant bits of the control point addresses are analyzed in detail. Simple bit truncation introduces a bias in the contour description and reconstruction, which is removed to a great extent by a bias compensation mechanism. The results obtained are compared with other methods devised for encoding weather radar contours.
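
The control-point selection step is easier to see in code. The sketch below is a deliberately simplified, hypothetical version applied to a single coordinate of a contour; the thesis works on full 2-D contours, adds midpoint control points when crossing-to-crossing segments are long, and attaches tag bits, none of which is reproduced here.

```python
# Simplified sketch of control-point selection: smooth the contour,
# find where the original and smoothed contours cross, and keep the
# point of maximum deviation between successive crossings.
import numpy as np

def control_points(contour, window=11):
    """Return indices of maximum deviation from a moving-average reference."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(contour, kernel, mode="same")  # smoothed reference contour
    deviation = contour - smoothed
    crossings = np.where(np.diff(np.sign(deviation)) != 0)[0]
    bounds = np.concatenate(([0], crossings, [len(contour) - 1]))
    points = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        if b > a:
            # Largest deviation between two successive crossings -> control point.
            points.append(a + int(np.argmax(np.abs(deviation[a:b + 1]))))
    return points

theta = np.linspace(0, 2 * np.pi, 200)
wiggly = np.sin(theta) + 0.1 * np.sin(15 * theta)  # synthetic contour coordinate
print(control_points(wiggly)[:10])
```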
274

Graph compression using graph grammars

Peternek, Fabian Hans Adolf January 2018 (has links)
This thesis presents work on compressed graph representations via hyperedge replacement grammars. It comprises two main parts. First, the RePair compression scheme, known for strings and trees, is generalized to graphs using graph grammars. Given an object, the scheme produces a small context-free grammar generating that object (called a “straight-line grammar”). The theoretical foundations of this generalization are presented, followed by a description of a prototype implementation. This implementation is then evaluated on real-world and synthetic graphs; the experiments show that several graphs are compressed more strongly by the new method than by current state-of-the-art approaches. The second part considers algorithmic questions on straight-line graph grammars. Two algorithms are presented to traverse the graph represented by such a grammar. Both have advantages and disadvantages: the first works with any grammar, but its runtime per traversal step depends on the input grammar; the second needs only constant time per traversal step, but works for a restricted class of grammars and requires quadratic preprocessing time and space. Finally, speed-up algorithms are considered. These are algorithms that decide specific problems in time depending only on the size of the compressed representation, and may thus be faster than a traditional algorithm running on the decompressed structure. The idea of such algorithms is to reuse computation already done for the rules of the grammar; the possible speed-up achieved this way is proportional to the compression ratio of the grammar. The main results here are a method to answer “regular path queries” and to decide whether two grammars generate isomorphic trees.
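
The graph generalization is easiest to motivate through the original string RePair, which the thesis lifts to digrams of adjacent hyperedges. Below is a minimal, unoptimized sketch of string RePair; real implementations use priority queues to reach linear time, whereas this version simply rescans the sequence, and the function and nonterminal names are illustrative.

```python
# String RePair: repeatedly replace the most frequent adjacent pair of
# symbols with a fresh nonterminal and record the rule, producing a
# straight-line grammar for the input.
from collections import Counter

def repair(text):
    rules = {}
    seq = list(text)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break  # no pair repeats: the grammar is finished
        nonterminal = f"R{len(rules)}"
        rules[nonterminal] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nonterminal)  # replace a non-overlapping occurrence
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

start, rules = repair("abrakadabra")
print(start)   # e.g. ['R2', 'k', 'a', 'd', 'R2']
print(rules)   # e.g. {'R0': ('a', 'b'), 'R1': ('R0', 'r'), 'R2': ('R1', 'a')}
```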
275

Lossless Message Compression

Hansson, Erik, Karlsson, Stefan January 2013 (has links)
In this thesis we investigated whether using compression when sending inter-process communication (IPC) messages can be beneficial. A literature study on lossless compression resulted in a compilation of algorithms and techniques. Using this compilation, the algorithms LZO, LZFX, LZW, LZMA, bzip2 and LZ4 were selected to be integrated into LINX as an extra layer to support lossless message compression. The testing involved sending messages with real telecom data between two nodes on a dedicated network, with different network configurations and message sizes. To calculate the effective throughput for each algorithm, the round-trip time was measured. We concluded that the fastest algorithms, i.e. LZ4, LZO and LZFX, were the most efficient in our tests.
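
The effective-throughput criterion used in the experiments can be sketched outside the LINX setup. The example below uses Python's standard-library codecs (zlib, bz2, lzma) as stand-ins for the algorithms tested in the thesis, and an assumed 100 Mbit/s link; the point is that a fast but weaker codec can still win once compression and decompression time are charged against the transfer time it saves.

```python
# Effective throughput = payload bytes delivered per second of
# (compress + transfer + decompress) time. Codecs and link speed are
# stand-ins, not the thesis setup.
import bz2, lzma, time, zlib

def effective_throughput(compress, decompress, payload, link_bps):
    t0 = time.perf_counter()
    packed = compress(payload)
    t1 = time.perf_counter()
    decompress(packed)
    t2 = time.perf_counter()
    wire_time = len(packed) * 8 / link_bps          # time on the network link
    total = (t1 - t0) + wire_time + (t2 - t1)       # compress + transfer + decompress
    return len(payload) / total                     # effective bytes per second

payload = b"signal=42;cell=17;rsrp=-93;" * 4000     # synthetic telecom-like message
for name, comp, decomp in [("zlib", zlib.compress, zlib.decompress),
                           ("bz2", bz2.compress, bz2.decompress),
                           ("lzma", lzma.compress, lzma.decompress)]:
    rate = effective_throughput(comp, decomp, payload, link_bps=100e6)
    print(f"{name}: {rate / 1e6:.1f} MB/s effective")
```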
276

Lossless medical image compression using integer transforms and predictive coding technique

Neela, Divya January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / D. V. Satish Chandra / The future of healthcare delivery systems and telemedical applications will undergo a radical change due to developments in wearable technologies, medical sensors, mobile computing and communication techniques. E-health emerged from the integration of networks and telecommunications, enabling applications that collect, sort and transfer medical data from distant locations for remote medical collaboration and diagnosis. Healthcare systems in recent years rely on images acquired in the two-dimensional (2D) domain for still images, or in the three-dimensional (3D) domain for volumetric images or video sequences. Images are acquired with many modalities, including X-ray, positron emission tomography (PET), magnetic resonance imaging (MRI), computed axial tomography (CAT) and ultrasound. Medical information is either in multidimensional or multi-resolution form, which creates an enormous amount of data. Efficient storage, retrieval, management and transmission of this voluminous data is extremely complex. One solution to this problem is to compress the medical data losslessly, so that diagnostic capabilities are not compromised. This report proposes techniques that combine integer transforms and predictive coding to enhance the performance of lossless compression. The performance of the proposed techniques is evaluated using compression measures such as entropy and scaled entropy.
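
The role of predictive coding and of the entropy measure can be illustrated with a small sketch. The left-neighbour predictor below is a hypothetical simplification; the report combines integer transforms with prediction, which is not reproduced here.

```python
# Previous-pixel prediction plus first-order entropy (bits per pixel):
# a lower entropy of the residuals indicates better lossless compressibility.
import numpy as np

def prediction_residuals(image):
    """Predict each pixel from its left neighbour and return the residuals."""
    image = image.astype(np.int32)
    predicted = np.hstack([image[:, :1], image[:, :-1]])  # left-neighbour predictor
    return image - predicted

def entropy_bits_per_symbol(values):
    """Shannon entropy of the value histogram, in bits per symbol."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
smooth = np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1) + 128  # synthetic smooth scan
print("raw entropy:     ", entropy_bits_per_symbol(smooth))
print("residual entropy:", entropy_bits_per_symbol(prediction_residuals(smooth)))
```

On smooth image content the residuals concentrate near zero, so their entropy drops well below that of the raw pixels, which is the gain the entropy measures are used to quantify.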
277

THE VIDEO SYSTEM OF LAUNCH VEHICLE

Xiangwu, Gao, Juan, Lin, Zhengguang, He 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / The XX launch vehicle flew an onboard video system, comprising video cameras, data compression devices and a channel switch device, for the second Chinese spaceflight. The camera is a PAL analog camera whose output is sampled and compressed by the compression device. The compressed digital video data is combined with telemetry data into the telemetry radio channel. Lighting is provided by sunlight, or by an onboard light when sunlight is unavailable. IRIG-B timing is used to correlate the video with other vehicle telemetry. The video system's influence on the vehicle's flight has been reduced to a minimum.
278

Analysis of Optimized Design Tradeoffs in Application of Wavelet Algorithms to Video Compression

Wanis, Paul, Fairbanks, John S. 10 1900 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / Because all video compression schemes introduce artifacts into the compressed video images, degradation occurs. These artifacts, generated here by a wavelet-based compression scheme, vary with the compression ratio and the input imagery, but show consistent patterns across applications. A number of design trade-offs can be made to mitigate the effect of these artifacts. By understanding the artifacts introduced by video compression and being able to anticipate the amount of image degradation, the video compression can be configured in a manner optimal for the telemetry application under consideration.
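
The trade-off can be illustrated directly: the harder the wavelet coefficients are thresholded (i.e., the higher the implied compression ratio), the larger the reconstruction error and the more visible the artifacts. The sketch below uses the PyWavelets package with a synthetic frame and arbitrary threshold levels; it is an illustration of the general mechanism, not the coder evaluated in the paper.

```python
# Keep only the largest wavelet coefficients and measure reconstruction error.
import numpy as np
import pywt

def compress_reconstruct(image, wavelet="haar", level=3, keep_fraction=0.05):
    """Zero all but the largest `keep_fraction` of wavelet coefficients."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr = np.where(np.abs(arr) >= threshold, arr, 0.0)      # hard thresholding
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
frame = 128 + 100 * np.sin(8 * np.pi * x) * np.cos(6 * np.pi * y)  # synthetic frame
for keep in (0.20, 0.05, 0.01):
    rec = compress_reconstruct(frame, keep_fraction=keep)
    mse = float(np.mean((frame - rec) ** 2))
    print(f"keep {keep:4.0%} of coefficients -> MSE {mse:8.3f}")
```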
279

Incrustation d'un logo dans un fichier vidéo codé avec le standard MPEG-2 / Embedding a logo in a video file encoded with the MPEG-2 standard

Keroulas, Patrick January 2009 (has links)
This thesis is the culmination of Patrick Keroulas's research project and addresses video compression, a field in rapid expansion with the democratization of video equipment and telecommunication networks. The initial question is whether it is possible to modify image content directly in the bitstream of a compressed video sequence. Such a mechanism would allow modifications to be made at any point of a network while avoiding decoding and re-encoding of the data stream, both of which are computationally very expensive. Several earlier works, briefly presented in the first part, have already proposed a fairly wide range of methods for filtering, denoising, image resizing, and so on. All of the publications encountered on this subject focus on transposing image processing from the spatial domain to the frequency domain. It was decided to focus the problem on an application potentially exploitable in broadcasting: embedding a logo, adjustable in position and opacity, into a video file encoded with the MPEG-2 standard, which is still in common use. The transform applied by this compression algorithm is the DCT (Discrete Cosine Transform). An article published in 1995 on video compositing in general is examined in more detail because it serves as the basis for this study. Some of the tools it proposes, which rely on the linearity and orthogonality of the transform, are reused in this project, but the approach proposed for handling the temporal problems is different. Next, the essential elements of the MPEG-2 standard are presented, both to explain its mechanisms and to describe the structure of an encoded file since, in practice, this would be the only accessible data. The fourth chapter of the study presents the technical solution, implemented and described in an article submitted to IEEE Transactions on Broadcasting. It is in this part that all the subtleties related to the coding are handled: the pixel-block structure, spatial prediction, half-pixel motion compensation, and whether or not inverse quantization is required. In light of the satisfactory results, the final discussion addresses the limits of the system: the trade-off between its efficiency, its degrees of freedom and the degree to which the stream must be decoded.
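
The property the compressed-domain approach relies on can be checked numerically: because the DCT is linear, alpha-blending a logo into a block gives the same result whether the blend is done on pixels or directly on DCT coefficients. The block size, opacity and use of SciPy's DCT below are illustrative; the MPEG-2 specifics the thesis handles (quantization, spatial prediction, half-pixel motion compensation) sit on top of this basic identity.

```python
# Linearity check: DCT((1-a)*X + a*L) == (1-a)*DCT(X) + a*DCT(L).
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, norm="ortho", axis=0), norm="ortho", axis=1)

def idct2(block):
    return idct(idct(block, norm="ortho", axis=0), norm="ortho", axis=1)

rng = np.random.default_rng(1)
frame_block = rng.integers(0, 256, (8, 8)).astype(float)     # 8x8 luminance block
logo_block = np.zeros((8, 8))
logo_block[2:6, 2:6] = 255.0                                  # crude logo patch
alpha = 0.4                                                   # logo opacity

pixel_domain = (1 - alpha) * frame_block + alpha * logo_block
reference = dct2(pixel_domain)                                # blend pixels, then DCT
compressed_domain = (1 - alpha) * dct2(frame_block) + alpha * dct2(logo_block)

print(np.allclose(reference, compressed_domain))              # True: linearity holds
print(float(np.max(np.abs(idct2(compressed_domain) - pixel_domain))))  # ~0
```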
280

SCALABLE LOW COMPLEXITY CODER FOR HIGH RESOLUTION AIRBORNE VIDEO

Lalgudi, Hariharan G., Marcellin, Michael W., Bilgin, Ali, Nadar, Mariappan S. 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Real-time transmission of airborne images to a ground station is highly desirable in many telemetering applications. Such transmission is often over an error-prone, time-varying wireless channel, possibly under jamming conditions. Hence, a fast, efficient, scalable, and error-resilient image compression scheme is vital to realize the full potential of airborne reconnaissance. JPEG2000, the current international standard for image compression, offers most of these features, but its computational complexity limits its use in some applications. We therefore present a scalable low complexity coder (SLCC) that possesses many desirable features of JPEG2000 while achieving high throughput.
