51

RADIX 95n: Binary-to-Text Data Conversion

Jones, Greg, 1963-2017. 08 1900 (has links)
This paper presents Radix 95n, a binary-to-text data conversion algorithm. Radix 95n (base 95) is a variable-length encoding scheme that offers slightly better efficiency than is available with conventional fixed-length encoding procedures. Radix 95n advances previous techniques by allowing a greater pool of 7-bit combinations to be made available for 8-bit data translation. Since 8-bit data (i.e., binary files) can be difficult to transfer over 7-bit networks, the Radix 95n conversion technique provides a way to convert data such as compiled programs or graphic images to printable ASCII characters and allows their transfer over 7-bit networks.
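As a rough illustration of why base 95 can beat fixed-length schemes such as Base64, the sketch below maps blocks of up to 8 bytes onto digits drawn from the 95 printable ASCII characters (space through '~'); 8 bytes become 10 characters, roughly 80% efficiency versus Base64's 75%. This shows only the general idea of base-95 block coding under my own assumptions (block size, digit widths, function names); it is not the paper's Radix 95n algorithm, whose variable-length details are not reproduced here.

```python
# Sketch of base-95 binary-to-text coding (an illustration of the general
# idea only, not the paper's Radix 95n algorithm).  Each block of up to
# 8 bytes is rewritten as digits in base 95 drawn from the 95 printable
# ASCII characters 0x20 (' ') .. 0x7E ('~').
FIRST, RADIX = 0x20, 95
# smallest number of base-95 digits that can represent a block of n bytes
WIDTH = {1: 2, 2: 3, 3: 4, 4: 5, 5: 7, 6: 8, 7: 9, 8: 10}

def encode(data: bytes) -> str:
    out = []
    for i in range(0, len(data), 8):
        block = data[i:i + 8]
        n = int.from_bytes(block, "big")
        digits = []
        for _ in range(WIDTH[len(block)]):
            n, d = divmod(n, RADIX)
            digits.append(chr(FIRST + d))
        out.extend(reversed(digits))
    return "".join(out)

def decode(text: str, nbytes: int) -> bytes:
    # inverse mapping, assuming the original byte length is known
    out, i = bytearray(), 0
    while nbytes > 0:
        blk = min(8, nbytes)
        n = 0
        for ch in text[i:i + WIDTH[blk]]:
            n = n * RADIX + (ord(ch) - FIRST)
        out += n.to_bytes(blk, "big")
        i += WIDTH[blk]
        nbytes -= blk
    return bytes(out)

payload = bytes(range(256))
coded = encode(payload)
assert decode(coded, len(payload)) == payload
print(f"{len(payload)} binary bytes -> {len(coded)} printable characters")  # 256 -> 320
```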
52

A heuristic method for reducing message redundancy in a file transfer environment

Bodwell, William Robert January 1976 (has links)
Intercomputer communications involves the transfer of information between intelligent hosts. Since communication costs are almost proportional to the amount of data transferred, the processing capability of the respective hosts can advantageously be applied to pre-processing and post-processing of the data to reduce redundancy. The major emphasis of this research is the development of the Substitution Method, which minimizes the data transfer between hosts required to reconstruct user JCL files, Fortran source files, and data files. The technique requires that a set of user files for each category be examined to determine the frequency distribution of symbols, fixed strings, and repeated symbol strings, thereby characterizing both symbol and structural redundancy. Information gathered during this examination, combined with the user-created Source Language Syntax Table, generates Encoding/Decoding Tables which are used to reduce both symbol and structural redundancy. The Encoding/Decoding Tables allow frequently encountered strings to be represented by only one or two symbols through the use of table shift symbols; the table shift symbols allow less frequently encountered symbols of the original alphabet to be represented as entries in a Secondary Encoding/Decoding Table. A technique is also described that enables a programmer to easily modify a Fortran program to take advantage of the Substitution Method's ability to compress data files by removing both informational and structural redundancy. Each user file requested for transfer is pre-processed at cost C[prep] to remove data (both symbol and structural redundancy) that need not be transferred for faithful reproduction of the file. The file is transferred over a noiseless channel at cost C[ptran]; the channel consists of presently available or proposed services of the common carriers and specialized common carriers. The received file is post-processed to reconstruct the original source file at cost C[post]. The costs associated with pre-processing, transferring, and post-processing are compared with the cost, C[otran], of transferring the entire file in its original form. / Ph. D.
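The table-shift idea can be conveyed with a small sketch. The tables, code values, shift byte, and sample line below are invented for illustration and are not the dissertation's trained Encoding/Decoding Tables: frequently encountered strings are replaced by single codes from a primary table, while rare symbols are escaped behind a table-shift code into a secondary table.

```python
# Illustrative sketch of substitution coding with a table-shift symbol
# (hypothetical tables and codes, not the dissertation's).
SHIFT = 0x1B  # hypothetical table-shift symbol: escape into the secondary table

PRIMARY = {"      WRITE(": 0x01, "      FORMAT(": 0x02, "CONTINUE": 0x03}  # frequent strings
SECONDARY = {"~": 0x01, "{": 0x02, "}": 0x03}                              # rare symbols

def encode(text: str) -> bytes:
    out, i = bytearray(), 0
    while i < len(text):
        # longest-match lookup in the primary table
        match = next((s for s in sorted(PRIMARY, key=len, reverse=True)
                      if text.startswith(s, i)), None)
        if match is not None:
            out.append(PRIMARY[match])
            i += len(match)
        elif text[i] in SECONDARY:
            out += bytes([SHIFT, SECONDARY[text[i]]])  # shift to the secondary table
            i += 1
        else:
            out.append(ord(text[i]))                    # ordinary symbol, sent as-is
            i += 1
    return bytes(out)

line = "      WRITE(6,100) X~Y"
print(len(line), "->", len(encode(line)), "bytes")      # 22 -> 12 bytes
```

A decoder would invert the same tables; transfer is worthwhile whenever C[prep] + C[ptran] + C[post] falls below C[otran] for the file in question.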
53

High performance signal coding employing vector quantization in multiple nonorthogonal domains with application to speech

Krishnan, Venkatesh 01 July 2001 (has links)
No description available.
54

Delay sensitive delivery of rich images over WLAN in telemedicine applications

Sankara Krishnan, Shivaranjani. January 2009 (has links)
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Jayant, Nikil; Committee Member: Altunbasak, Yucel; Committee Member: Sivakumar, Raghupathy. Part of the SMARTech Electronic Thesis and Dissertation Collection.
55

Image compression system for a 3U CubeSat

Nzeugaing, Gutembert Nganpet January 2013 (has links)
Thesis submitted in partial fulfilment of the requirements for the degree of Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology, 2013 / Earth observation satellites utilise sensors or cameras to capture data or images that are relayed to the ground station(s). The ZACUBE-02 CubeSat currently in development at the French South African Institute of Technology (F'SATI) contains a high-resolution 5-megapixel on-board camera. The purpose of the camera is to capture images of Earth and relay them to the ground station once communication is established. The captured images, which can amount to a large volume of data, have to be stored on-board as the CubeSat awaits the next cycle of transmission to the ground station. This mode of operation introduces a number of problems, as the CubeSat has limited storage and memory capacity and is not able to store large amounts of data. This, together with the limitation of the downlink capacity, has created the need for the design and development of an image compression system suitable for the CubeSat environment. Image compression focuses on reducing the size of images to be stored as well as the size of the images to be transmitted to the ground station. The purpose of the study is to propose a compression system to be implemented on ZACUBE-02. An intensive study of current, proposed and implemented compression methods, algorithms and techniques, as well as the CubeSat specification, served as input for defining the requirements for such a system. The proposed design is a combination of image segmentation, image linearization and image entropy coding (run-length coding). This combination is implemented in order to achieve lossless image compression. For the proposed design, a compression ratio of 10:1 was obtained without negatively affecting image quality. The on-board storage memory constraints, the power constraints and the bandwidth constraints are met with the implementation of the proposed design, resulting in the downlink transmission time being minimised. Within the study a number of objectives were met in order to design, implement and test the compression system. These included a detailed study of image compression techniques; a look into techniques for improving the compression ratio; and a study of industrial hardware components suitable for the space environment. Keywords: CubeSat, hardware, compression, satellite image compression, Gumstix Overo Water, ZACUBE-02.
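The entropy-coding stage mentioned above (run-length coding) can be sketched in a few lines. This is only an illustration of lossless run-length coding on a linearized pixel stream, under my own assumptions; it is not the ZACUBE-02 flight implementation, and the 10:1 ratio reported in the thesis depends on the full segmentation/linearization pipeline.

```python
# Minimal lossless run-length coder for a linearized pixel stream
# (an illustrative sketch, not the ZACUBE-02 flight code).
from itertools import groupby
from typing import Iterable, List, Tuple

def rle_encode(pixels: Iterable[int]) -> List[Tuple[int, int]]:
    # consecutive identical values become (value, run_length) pairs
    return [(value, sum(1 for _ in run)) for value, run in groupby(pixels)]

def rle_decode(pairs: Iterable[Tuple[int, int]]) -> List[int]:
    return [value for value, count in pairs for _ in range(count)]

row = [0, 0, 0, 0, 255, 255, 17, 17, 17, 17, 17, 17]
assert rle_decode(rle_encode(row)) == row
print(rle_encode(row))  # [(0, 4), (255, 2), (17, 6)]
```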
56

Error resilience for video coding services over packet-based networks

Zhang, Jian, Electrical Engineering, Australian Defence Force Academy, UNSW January 1999 (has links)
Error resilience is an important issue when coded video data is transmitted over wired and wireless networks. Errors can be introduced by network congestion, mis-routing and channel noise. These transmission errors can result in bit errors being introduced into the transmitted data or packets of data being completely lost. Consequently, the quality of the decoded video is degraded significantly. This thesis describes new techniques for minimising this degradation. To verify video error resilience tools, it is first necessary to consider the methods used to carry out experimental measurements. For most audio-visual services, streams of both audio and video data need to be simultaneously transmitted on a single channel. The inclusion of the impact of multiplexing schemes, such as MPEG 2 Systems, in error resilience studies is also an important consideration. It is shown that error resilience measurements including the effect of the Systems Layer differ significantly from those based only on the Video Layer. Two major issues of error resilience are investigated within this thesis: resynchronisation after error detection and error concealment. Results for resynchronisation using small slices, adaptive slice sizes and macroblock resynchronisation schemes are provided. These measurements show that the macroblock resynchronisation scheme achieves the best performance, although it is not included in the MPEG 2 standard. The performance of the adaptive slice size scheme, however, is similar to that of the macroblock resynchronisation scheme, and this approach is compatible with the MPEG 2 standard. The most important contribution of this thesis is a new concealment technique, namely Decoder Motion Vector Estimation (DMVE). The decoded video quality can be improved significantly with this technique. Basically, this technique utilises the temporal redundancy between the current and the previous frames, and the correlation between lost macroblocks and their surrounding pixels. Therefore, motion estimation can be applied again to search in the previous picture for a match to those lost macroblocks. The process is similar to the one the encoder performs, but it takes place in the decoder. The integration of DMVE with small slices, adaptive slice sizes or macroblock resynchronisation is also evaluated. This provides an overview of the performance produced by individual techniques compared to the combined techniques. Results show that high performance can be achieved by integrating DMVE with an effective resynchronisation scheme, even at high cell loss rates. The results of this thesis demonstrate clearly that the MPEG 2 standard is capable of providing a high level of error resilience, even in the presence of high loss. The key to this performance is appropriate tuning of encoders and effective concealment in decoders.
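A much-simplified sketch of the DMVE idea follows: the decoder matches the ring of correctly received pixels around a lost macroblock against the previous frame and copies the block at the best-matching displacement. The function name, the 16x16 block size, the ring width and the +/-8 search range are my own illustrative assumptions, not the thesis's actual DMVE parameters or its integration with resynchronisation.

```python
# Simplified decoder-side motion vector estimation (DMVE) for concealing a
# lost macroblock; block size, ring width and search range are assumptions.
import numpy as np

def conceal_block(prev: np.ndarray, curr: np.ndarray, y: int, x: int,
                  size: int = 16, ring: int = 4, search: int = 8) -> np.ndarray:
    """Return a size x size replacement for the lost block at (y, x).

    Assumes the block lies far enough from the frame borders that the
    surrounding ring and the search window stay inside both frames.
    """
    def ring_and_block(frame, yy, xx):
        patch = frame[yy - ring:yy + size + ring, xx - ring:xx + size + ring].astype(float)
        inner = patch[ring:-ring, ring:-ring].copy()
        patch[ring:-ring, ring:-ring] = np.nan   # mask the block itself; match on the ring only
        return patch, inner

    target, _ = ring_and_block(curr, y, x)       # ring of received pixels around the lost block
    best_block, best_err = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand, block = ring_and_block(prev, y + dy, x + dx)
            err = np.nansum((target - cand) ** 2)  # SSD over the ring only
            if err < best_err:
                best_err, best_block = err, block
    return best_block
```

In a decoder, such a routine would be invoked for each macroblock flagged as lost once resynchronisation has located the error.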
57

Classification using residual vector quantization

Ali Khan, Syed Irteza 13 January 2014 (has links)
Residual vector quantization (RVQ) is a 1-nearest-neighbor (1-NN) type of technique. RVQ is a multi-stage implementation of regular vector quantization: an input is successively quantized to the nearest codevector in each stage codebook. In classification, nearest neighbor techniques are very attractive because they model the ideal Bayes class boundaries very accurately. However, nearest neighbor classification requires a large, representative dataset, and since a test input is assigned a class membership only after an exhaustive search of the entire training set, even a reasonably large training set can make a nearest neighbor classifier infeasibly costly to implement. Although the k-d tree structure offers a far more efficient implementation of 1-NN search, the cost of storing the data points can become prohibitive, especially in higher dimensions. RVQ offers a cost-effective implementation of 1-NN-based classification: because of the direct-sum structure of the RVQ codebook, the memory and computational cost of the 1-NN-based system is greatly reduced. Although the multi-stage implementation of the RVQ codebook compromises the accuracy of the class boundaries compared to an equivalent 1-NN system, the classification error has been shown empirically to be within 3% to 4% of the performance of an equivalent 1-NN-based classifier.
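The direct-sum structure that makes RVQ cheap can be seen in a small sketch: each stage quantizes the residual left by the previous stage, so three 8-entry codebooks act like a single 512-entry codebook at a fraction of the search and storage cost. The random codebooks, dimensions and names below are placeholders of mine; real codebooks would be trained on stage residuals, and for classification each composite codevector would carry a class label. This is not the thesis's trained system.

```python
# Minimal sketch of multi-stage residual vector quantization (RVQ).
# Codebooks are random placeholders; in practice they are trained
# (e.g. k-means on stage residuals).
import numpy as np

rng = np.random.default_rng(0)
# 3 stages, 8 codevectors each, dimension 4 (direct-sum codebook of 8**3 = 512 codevectors)
stages = [rng.normal(size=(8, 4)) * s for s in (1.0, 0.5, 0.25)]

def rvq_encode(x: np.ndarray) -> list:
    residual, indices = x.astype(float), []
    for codebook in stages:
        i = int(np.argmin(np.sum((codebook - residual) ** 2, axis=1)))  # nearest codevector
        indices.append(i)
        residual = residual - codebook[i]          # pass the residual to the next stage
    return indices

def rvq_decode(indices: list) -> np.ndarray:
    return sum(codebook[i] for codebook, i in zip(stages, indices))

x = rng.normal(size=4)
idx = rvq_encode(x)
print(idx, np.linalg.norm(x - rvq_decode(idx)))    # residual error shrinks as stages are added
```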
58

Delay sensitive delivery of rich images over WLAN in telemedicine applications

Sankara Krishnan, Shivaranjani 27 May 2009 (has links)
Transmission of medical images over WLANs, which mandates lossless delivery of content, presents a great challenge. The large size of these images, coupled with the low acceptance of traditional image compression techniques within the medical community, compounds the problem even more. These factors are of enormous significance in a hospital setting in the context of real-time image collaboration. However, recent advances in medical image compression, such as diagnostically lossless compression methodology, have made a solution to this difficult problem feasible. The growing popularity of high-speed wireless LANs in enterprise applications and the introduction of the new 802.11n draft standard have made this problem pertinent. The thesis makes recommendations on the degree of compression to be performed for specific instances of image communication applications, based on the image size and the underlying network devices and their topology. During our analysis it was found that in most cases only a portion of the image, typically its region of interest, is able to meet the time deadline requirement. This dictates the need for an adaptive method that maximizes the percentage of the image delivered to the receiver within the deadline. The problem of maximizing delivery of regions of interest within the deadline is modeled as a multi-commodity flow problem in this work. Though this model provides an optimal solution, it is NP-hard and hence cannot be implemented in dynamic networks. An approximation algorithm that uses a greedy approach to flow allocation is proposed to serve connection requests in real time. While implementing the integer programming model is not feasible due to time constraints, the heuristic can be used to provide a near-optimal solution to the problem of maximizing the reliable delivery of regions of interest of medical images within delay deadlines. This scenario is typically expected when new connection requests arrive after the initial flow allocations have been made.
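The flavour of the greedy heuristic can be conveyed with a deliberately simplified sketch: rather than the thesis's multi-commodity flow formulation over a WLAN topology, the toy below admits region-of-interest transfers over a single shared link in earliest-deadline order, accepting each request only if its ROI bytes can still arrive before its deadline. All names, the link rate and the request sizes are invented for illustration.

```python
# Toy earliest-deadline greedy admission for region-of-interest transfers
# (an illustrative simplification, not the thesis's multi-commodity flow heuristic).
from dataclasses import dataclass
from typing import List

@dataclass
class Request:
    name: str
    roi_bytes: int      # region-of-interest bytes that must arrive in time
    deadline_s: float   # delivery deadline, seconds from now

LINK_RATE = 6e6 / 8     # assumed usable throughput of the shared WLAN link, bytes/s

def greedy_admit(requests: List[Request]) -> List[str]:
    admitted, busy_until = [], 0.0
    for r in sorted(requests, key=lambda r: r.deadline_s):   # earliest deadline first
        finish = busy_until + r.roi_bytes / LINK_RATE
        if finish <= r.deadline_s:                           # admit only if the ROI makes its deadline
            admitted.append(r.name)
            busy_until = finish
    return admitted

reqs = [Request("xray-roi", 800_000, 1.5),
        Request("ct-roi", 1_200_000, 3.0),
        Request("mri-roi", 5_000_000, 4.0)]
print(greedy_admit(reqs))   # ['xray-roi', 'ct-roi']
```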
59

A study of image compression techniques, with specific focus on weighted finite automata

Muller, Rikus 12 1900 (has links)
Thesis (MSc (Mathematical Sciences))--University of Stellenbosch, 2005. / Image compression using weighted finite automata (WFA) is studied and implemented in Matlab. Other more prominent image compression techniques, namely JPEG, vector quantization, EZW wavelet image compression and fractal image compression are also presented. The performance of WFA image compression is then compared to those of some of the abovementioned techniques.
60

Uma proposta de estimação de movimento para o codificador de vídeo Dirac / A proposal of motion estimation for Dirac video codec

Araujo, André Filgueiras de 16 August 2018 (has links)
Advisor: Yuzo Iano / Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2010 / Abstract: The main purpose of this work is to design a new algorithm that makes motion estimation in the Dirac video codec more efficient. Motion estimation is a critical stage in video coding, in which most of the processing lies. The recently released Dirac codec is based on techniques different from those usually employed in the more common codecs (such as the MPEG family) and aims at achieving efficiency comparable to the best current codecs (notably H.264/AVC). This work initially presents comparative studies evaluating state-of-the-art motion estimation techniques and the Dirac codec, which provide the knowledge base for the algorithm proposed in the sequel. The proposal is the Modified Hierarchical Enhanced Adaptive Rood Pattern Search (MHEARPS) algorithm. It presents superior performance compared to the other relevant algorithms in every case analysed, providing on average 79% less computation with similar video reconstruction quality. / Master's / Telecommunications and Telematics / Master in Electrical Engineering
