  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
631

Efficient and perceptual picture coding techniques. / CUHK electronic theses & dissertations collection

January 2009 (has links)
The objective of this thesis is to develop efficient and perceptual image and video coding techniques. Two parts of the work are investigated. / In the first part, efficient algorithms are proposed to reduce the complexity of the H.264 encoder, the latest state-of-the-art video coding standard. Intra and Inter mode decision play a vital role in the H.264 encoder and can reduce spatial and temporal redundancy significantly, but at a high computational cost. A fast Intra mode decision algorithm and a fast Inter mode decision algorithm are therefore proposed. Experimental results show that the proposed algorithms save substantial computation while maintaining coding performance. Moreover, a real-time H.264 baseline codec is implemented on a mobile device, and an H.264-based mobile video conferencing system is built on top of it. / The second part of this thesis investigates two kinds of perceptual picture coding techniques. The first is just noticeable distortion (JND) based picture coding. A DCT-based spatio-temporal JND model is proposed; it efficiently represents the perceptual redundancies in images and is consistent with human visual system (HVS) characteristics. The proposed JND model is then incorporated into image and video coding to improve perceptual quality: a transparent image coder and a perceptually optimized H.264 video coder are implemented on top of it. The second technique is an image compression scheme based on recent advances in texture synthesis, with perceptual visual quality rather than pixel-wise fidelity as the performance criterion. As demonstrated in extensive experiments, the proposed techniques significantly improve the perceptual quality of picture coding. / Wei Zhenyu. / Adviser: Ngan Ngi. 
/ Source: Dissertation Abstracts International, Volume: 73-01, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 148-154). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
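The JND-based coding described in this record can be illustrated with a small sketch: DCT coefficients whose magnitude falls below a per-frequency just-noticeable-distortion threshold are zeroed, on the assumption that the resulting error is invisible to the viewer. The threshold values below are made up for illustration; the actual model in the thesis is derived from HVS characteristics.

```python
# Hypothetical sketch of JND-based coefficient suppression: coefficients
# whose magnitude is below the (illustrative) JND threshold are zeroed.

def jnd_suppress(coeffs, thresholds):
    """Zero every coefficient whose magnitude is below its JND threshold."""
    return [
        [0 if abs(c) < t else c for c, t in zip(row, trow)]
        for row, trow in zip(coeffs, thresholds)
    ]

# A 4x4 toy block of DCT coefficients and a flat threshold of 5.
block = [[120, 7, -3, 1],
         [ 10, 4,  2, 0],
         [ -6, 2,  1, 0],
         [  3, 1,  0, 0]]
flat = [[5] * 4 for _ in range(4)]
suppressed = jnd_suppress(block, flat)
```

A real JND model would vary the threshold with frequency, luminance and temporal activity rather than using a flat value.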
632

Novel error resilient techniques for the robust transport of MPEG-4 video over error-prone networks. / CUHK electronic theses & dissertations collection

January 2004 (has links)
Bo Yan. / "May 2004." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (p. 117-131). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
633

Arbitrary block-size transform video coding. / CUHK electronic theses & dissertations collection

January 2011 (has links)
Transform coding is a very important tool in video coding: it decorrelates the pixel data and removes redundancy among pixels to achieve compression. Traditionally, the order-8 transform is used in video and image coding. The latest video coding standards, such as H.264/AVC, adopt both order-4 and order-8 transforms. The adaptive use of more than one transform size is known as Arbitrary Block-size Transform (ABT). Transforms other than order-4 and order-8 can also be used in ABT, and larger transform sizes such as order-16 are expected to benefit video sequences with higher resolutions such as 720p and 1080p. As a result, the order-16 transform is introduced into the ABT system. / In this thesis, the development of simple but efficient order-16 transforms is presented, together with analysis and comparison against existing order-16 transforms. The proposed order-16 transforms were integrated individually into the existing coding standard reference software to form a new ABT system in which order-4, order-8 and order-16 transforms coexist, the most appropriate transform being selected according to its rate-distortion performance. Experimental results show a remarkable improvement in coding performance: the proposed ABT system achieves a significant bit rate reduction while both subjective and objective quality remain unchanged. / Three kinds of order-16 orthogonal DCT-like integer transforms are proposed in this thesis. The first is a simple integer transform expanded from an existing order-8 ICT. The second is a hybrid integer transform derived from the Dyadic Weighted Walsh Transform (DWWT), which is shown to perform better than the simple integer transform. The last is a recursive transform, in which the order-2N transform is derived from the order-N one; it is very close to the DCT. This recursive transform can be implemented in two different ways, denoted LLMICT and CSFICT, both with excellent coding performance. The proposed transforms were implemented in the reference software of H.264 and AVS and compared with other order-16 orthogonal integer transforms. Experimental results show that the proposed transforms give excellent coding performance and are easy to compute. / Besides ABT with higher-order transforms, transform-based template matching is also investigated. A fast method of template matching, called Fast Walsh Search, is developed; it has accuracy similar to exhaustive search but a significantly lower computation requirement. / Prior knowledge of the coefficient distribution is a key to better coding performance and is useful in many areas of coding, such as rate control and rate-distortion optimization. It is also shown that the coefficient distribution of the predicted residue is closer to a Cauchy distribution than to the traditionally assumed Laplace distribution, which can effectively improve existing processing techniques. / Fong, Chi Keung. / Adviser: Wai Kuen Cham. / Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references. / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
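The "order-2N from order-N" construction mentioned in this record can be illustrated with the classic Walsh-Hadamard recursion H_2N = [[H_N, H_N], [H_N, -H_N]]. This is not the thesis's LLMICT/CSFICT recursion, which is DCT-like and more elaborate; it only shows the recursive-doubling principle for an orthogonal integer transform.

```python
# Build an order-16 orthogonal integer transform by recursive doubling,
# using the Walsh-Hadamard recursion as a stand-in for the (more
# elaborate) DCT-like recursions proposed in the thesis.

def hadamard(n):
    """Return the order-n Walsh-Hadamard matrix (n must be a power of 2)."""
    if n == 1:
        return [[1]]
    h = hadamard(n // 2)
    top = [row + row for row in h]                     # [H_N,  H_N]
    bottom = [row + [-x for x in row] for row in h]    # [H_N, -H_N]
    return top + bottom

H16 = hadamard(16)

def gram(m):
    """Compute M * M^T to check orthogonality of the rows."""
    n = len(m)
    return [[sum(m[i][k] * m[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

The Gram matrix of H16 is 16 times the identity, confirming that each doubling step preserves orthogonality.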
634

Three dimensional DCT based video compression.

January 1997 (has links)
by Chan Kwong Wing Raymond. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 115-123). / Acknowledgments --- p.i / Table of Contents --- p.ii-v / List of Tables --- p.vi / List of Figures --- p.vii / Abstract --- p.1 / Chapter Chapter 1 : --- Introduction / Chapter 1.1 --- An Introduction to Video Compression --- p.3 / Chapter 1.2 --- Overview of Problems --- p.4 / Chapter 1.2.1 --- Analog Video and Digital Problems --- p.4 / Chapter 1.2.2 --- Low Bit Rate Application Problems --- p.4 / Chapter 1.2.3 --- Real Time Video Compression Problems --- p.5 / Chapter 1.2.4 --- Source Coding and Channel Coding Problems --- p.6 / Chapter 1.2.5 --- Bit-rate and Quality Problems --- p.7 / Chapter 1.3 --- Organization of the Thesis --- p.7 / Chapter Chapter 2 : --- Background and Related Work / Chapter 2.1 --- Introduction --- p.9 / Chapter 2.1.1 --- Analog Video --- p.9 / Chapter 2.1.2 --- Digital Video --- p.10 / Chapter 2.1.3 --- Color Theory --- p.10 / Chapter 2.2 --- Video Coding --- p.12 / Chapter 2.2.1 --- Predictive Coding --- p.12 / Chapter 2.2.2 --- Vector Quantization --- p.12 / Chapter 2.2.3 --- Subband Coding --- p.13 / Chapter 2.2.4 --- Transform Coding --- p.14 / Chapter 2.2.5 --- Hybrid Coding --- p.14 / Chapter 2.3 --- Transform Coding --- p.15 / Chapter 2.3.1 --- Discrete Cosine Transform --- p.16 / Chapter 2.3.1.1 --- 1-D Fast Algorithms --- p.16 / Chapter 2.3.1.2 --- 2-D Fast Algorithms --- p.17 / Chapter 2.3.1.3 --- Multidimensional DCT Algorithms --- p.17 / Chapter 2.3.2 --- Quantization --- p.18 / Chapter 2.3.3 --- Entropy Coding --- p.18 / Chapter 2.3.3.1 --- Huffman Coding --- p.19 / Chapter 2.3.3.2 --- Arithmetic Coding --- p.19 / Chapter Chapter 3 : --- Existing Compression Scheme / Chapter 3.1 --- Introduction --- p.20 / Chapter 3.2 --- Motion JPEG --- p.20 / Chapter 3.3 --- MPEG --- p.20 / Chapter 3.4 --- H.261 --- p.22 / Chapter 3.5 --- Other Techniques --- p.23 / Chapter 3.5.1 --- Fractals --- 
p.23 / Chapter 3.5.2 --- Wavelets --- p.23 / Chapter 3.6 --- Proposed Solution --- p.24 / Chapter 3.7 --- Summary --- p.25 / Chapter Chapter 4 : --- Fast 3D-DCT Algorithms / Chapter 4.1 --- Introduction --- p.27 / Chapter 4.1.1 --- Motivation --- p.27 / Chapter 4.1.2 --- Potentials of 3D DCT --- p.28 / Chapter 4.2 --- Three Dimensional Discrete Cosine Transform (3D-DCT) --- p.29 / Chapter 4.2.1 --- Inverse 3D-DCT --- p.29 / Chapter 4.2.2 --- Forward 3D-DCT --- p.30 / Chapter 4.3 --- 3-D FCT (3-D Fast Cosine Transform Algorithm) --- p.30 / Chapter 4.3.1 --- Partitioning and Rearrangement of Data Cube --- p.30 / Chapter 4.3.1.1 --- Spatio-temporal Data Cube --- p.30 / Chapter 4.3.1.2 --- Spatio-temporal Transform Domain Cube --- p.31 / Chapter 4.3.1.3 --- Coefficient Matrices --- p.31 / Chapter 4.3.2 --- 3-D Inverse Fast Cosine Transform (3-D IFCT) --- p.32 / Chapter 4.3.2.1 --- Matrix Representations --- p.32 / Chapter 4.3.2.2 --- Simplification of the calculation steps --- p.33 / Chapter 4.3.3 --- 3-D Forward Fast Cosine Transform (3-D FCT) --- p.35 / Chapter 4.3.3.1 --- Decomposition --- p.35 / Chapter 4.3.3.2 --- Reconstruction --- p.36 / Chapter 4.4 --- The Fast Algorithm --- p.36 / Chapter 4.5 --- Example using 4x4x4 IFCT --- p.38 / Chapter 4.6 --- Complexity Comparison --- p.43 / Chapter 4.6.1 --- Complexity of Multiplications --- p.43 / Chapter 4.6.2 --- Complexity of Additions --- p.43 / Chapter 4.7 --- Implementation Issues --- p.44 / Chapter 4.8 --- Summary --- p.46 / Chapter Chapter 5 : --- Quantization / Chapter 5.1 --- Introduction --- p.49 / Chapter 5.2 --- Dynamic Ranges of 3D-DCT Coefficients --- p.49 / Chapter 5.3 --- Distribution of 3D-DCT AC Coefficients --- p.54 / Chapter 5.4 --- Quantization Volume --- p.55 / Chapter 5.4.1 --- Shifted Complement Hyperboloid --- p.55 / Chapter 5.4.2 --- Quantization Volume --- p.58 / Chapter 5.5 --- Scan Order for Quantized 3D-DCT Coefficients --- p.59 / Chapter 5.6 --- Finding Parameter Values --- p.60 / Chapter 
5.7 --- Experimental Results from Using the Proposed Quantization Values --- p.65 / Chapter 5.8 --- Summary --- p.66 / Chapter Chapter 6 : --- Entropy Coding / Chapter 6.1 --- Introduction --- p.69 / Chapter 6.1.1 --- Huffman Coding --- p.69 / Chapter 6.1.2 --- Arithmetic Coding --- p.71 / Chapter 6.2 --- Zero Run-Length Encoding --- p.73 / Chapter 6.2.1 --- Variable Length Coding in JPEG --- p.74 / Chapter 6.2.1.1 --- Coding of the DC Coefficients --- p.74 / Chapter 6.2.1.2 --- Coding of the AC Coefficients --- p.75 / Chapter 6.2.2 --- Run-Level Encoding of the Quantized 3D-DCT Coefficients --- p.76 / Chapter 6.3 --- Frequency Analysis of the Run-Length Patterns --- p.76 / Chapter 6.3.1 --- The Frequency Distributions of the DC Coefficients --- p.77 / Chapter 6.3.2 --- The Frequency Distributions of the AC Coefficients --- p.77 / Chapter 6.4 --- Huffman Table Design --- p.84 / Chapter 6.4.1 --- DC Huffman Table --- p.84 / Chapter 6.4.2 --- AC Huffman Table --- p.85 / Chapter 6.5 --- Implementation Issue --- p.85 / Chapter 6.5.1 --- Get Category --- p.85 / Chapter 6.5.2 --- Huffman Encode --- p.86 / Chapter 6.5.3 --- Huffman Decode --- p.86 / Chapter 6.5.4 --- PutBits --- p.88 / Chapter 6.5.5 --- GetBits --- p.90 / Chapter Chapter 7 : --- "Contributions, Concluding Remarks and Future Work" / Chapter 7.1 --- Contributions --- p.92 / Chapter 7.2 --- Concluding Remarks --- p.93 / Chapter 7.2.1 --- The Advantages of 3D DCT codec --- p.94 / Chapter 7.2.2 --- Experimental Results --- p.95 / Chapter 7.3 --- Future Work --- p.95 / Chapter 7.3.1 --- Integer Discrete Cosine Transform Algorithms --- p.95 / Chapter 7.3.2 --- Adaptive Quantization Volume --- p.96 / Chapter 7.3.3 --- Adaptive Huffman Tables --- p.96 / Appendices: / Appendix A : The detailed steps in the simplification of Equation 4.29 --- p.98 / Appendix B : The program Listing of the Fast DCT Algorithms --- p.101 / Appendix C : Tables to Illustrate the Reordering of the Quantized Coefficients --- p.110 / Appendix 
D : Sample Values of the Quantization Volume --- p.111 / Appendix E : A 16-bit VLC table for AC Run-Level Pairs --- p.113 / References --- p.115
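The 3-D DCT at the heart of this codec is separable: it can be computed by applying a 1-D DCT-II along each of the three axes of a spatio-temporal cube in turn. The brute-force sketch below illustrates the definition on a 4x4x4 cube; the thesis's 3-D FCT reaches the same result with far fewer operations.

```python
import math

def dct1d(v):
    """Orthonormal 1-D DCT-II of a list of samples."""
    n = len(v)
    out = []
    for k in range(n):
        s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def dct3d(cube):
    """Separable 3-D DCT of cube[z][y][x]: 1-D DCT along x, then y, then z."""
    n = len(cube)
    c = [[dct1d(cube[z][y]) for y in range(n)] for z in range(n)]  # along x
    for z in range(n):                                             # along y
        for x in range(n):
            col = dct1d([c[z][y][x] for y in range(n)])
            for y in range(n):
                c[z][y][x] = col[y]
    for y in range(n):                                             # along z
        for x in range(n):
            col = dct1d([c[z][y][x] for z in range(n)])
            for z in range(n):
                c[z][y][x] = col[z]
    return c

# A constant 4x4x4 cube concentrates all its energy in the DC coefficient.
cube = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
C = dct3d(cube)
```

For a constant cube of value 1, every AC coefficient vanishes and the DC coefficient is 4^(3/2) = 8, which is the energy-compaction property the codec exploits.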
635

On design of a scalable video data placement strategy for supporting a load balancing video-on-demand storage server.

January 1997 (has links)
by Kelvin Kwok-wai Law. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (leaves 66-68). / Abstract --- p.i / Acknowledgments --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Motivation --- p.2 / Chapter 1.3 --- Scope --- p.3 / Chapter 1.4 --- Dissertation Outline --- p.4 / Chapter 2 --- Background and Related Researches --- p.6 / Chapter 2.1 --- Interactive Services --- p.6 / Chapter 2.2 --- VOD Architecture --- p.7 / Chapter 2.3 --- Video Compression --- p.10 / Chapter 2.3.1 --- DCT Based Compression --- p.11 / Chapter 2.3.2 --- Subband Video Compression --- p.12 / Chapter 2.4 --- Related Research --- p.14 / Chapter 3 --- Multiple Resolutions Video File System --- p.16 / Chapter 3.1 --- Physical Disk Storage System --- p.16 / Chapter 3.2 --- Multi-resolution Video Data Placement Scheme --- p.17 / Chapter 3.3 --- Example of our Video Block Assignment Algorithm --- p.23 / Chapter 3.4 --- An Assignment Algorithm for Homogeneous Video Files --- p.26 / Chapter 4 --- Disk Scheduling and Admission Control --- p.33 / Chapter 4.1 --- Disk Scheduling Algorithm --- p.33 / Chapter 4.2 --- Admission Control --- p.40 / Chapter 5 --- Load Balancing of the Disk System --- p.43 / Chapter 6 --- Buffer Management --- p.49 / Chapter 6.1 --- Buffer Organization --- p.49 / Chapter 6.2 --- Buffer Requirement For Different Video Playback Mode --- p.51 / Chapter 7 --- Conclusions --- p.63 / Bibliography --- p.66
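The load-balancing idea behind a placement scheme like the one in this thesis can be sketched in its simplest form: video blocks are striped round-robin across the disks of the storage server, so that sequential playback touches every disk equally often. The thesis's actual algorithm additionally handles multiple resolutions per video; this sketch shows only the balancing principle, and `place_blocks` is a hypothetical helper, not the thesis's API.

```python
# Round-robin striping of video blocks across disks: block i of a video
# that starts on disk `offset` is stored on disk (offset + i) mod num_disks.

def place_blocks(num_blocks, num_disks, offset=0):
    """Return the disk index for each block of one video."""
    return [(offset + i) % num_disks for i in range(num_blocks)]

# Stripe a 20-block video over 5 disks, starting at disk 2.
layout = place_blocks(20, 5, offset=2)
loads = [layout.count(d) for d in range(5)]
```

With 20 blocks over 5 disks each disk holds exactly 4 blocks, so a stream reading the video sequentially generates an identical request rate on every disk.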
636

Meias de gorgurão como monoterapia no tratamento do linfedema de membros inferiores / Grosgrain stockings as monotherapy in the treatment of lower-limb lymphedema

Guimarães, Tânia Dias 30 April 2014 (has links)
Submitted by Fabíola Silva (fabiola.silva@famerp.br) on 2016-09-14T17:46:32Z No. of bitstreams: 1 taniadiasguimaraes_dissert.pdf: 2221794 bytes, checksum: 31be5c6999f747328bca96563ba514ba (MD5) / Made available in DSpace on 2016-09-14T17:46:32Z (GMT). No. of bitstreams: 1 taniadiasguimaraes_dissert.pdf: 2221794 bytes, checksum: 31be5c6999f747328bca96563ba514ba (MD5) Previous issue date: 2014-04-30 / Introduction: When it is difficult to combine different therapies, a compression mechanism is the preferred monotherapy for lymphedema. Objective: The aim of this study was to assess, over one month, the effectiveness of a compression mechanism as monotherapy to reduce the volume of leg lymphedema, using a cotton-polyester (grosgrain) stocking. Patients and Methods: In 2013, 26 consecutive patients with unilateral or bilateral lower-leg lymphedema were assessed in a prospective clinical trial at the Clinica Godoy, Sao Jose do Rio Preto, Brazil. Six participants were male and 20 female, with ages ranging from 26 to 72 years (mean: 49 years). All patients with a clinical diagnosis of grade II lower-leg lymphedema, regardless of cause, were included. Patients with a history of allergies or intolerance of compression mechanisms, and those with infections, joint immobility or other conditions that might interfere with edema, were excluded. All patients were evaluated by volumetry, using the water displacement technique, at the beginning of treatment and weekly thereafter. The compression mechanism was explained to all participants, who were advised about the need for frequent adjustments, how to adjust the stockings and the necessary care. At each consultation, volumetric variations, the patient's tolerance of the treatment, adverse events, correct usage and the need for adjustments were assessed. Major adjustments were made by a seamstress after evaluation by the treatment team. The data were input into a Microsoft Excel spreadsheet. 
The study was approved by the Research Ethics Committee and all patients signed informed consent forms. Quantitative variables are reported as means and standard deviations when the distribution was normal, or as medians and interquartile ranges when the data were asymmetric. The relationships of these variables to the outcomes were compared using the Wilcoxon test, with an alpha error of 5% considered acceptable. Results: Forty-nine legs of the 26 participants with lymphedema were assessed. From week to week, both positive and negative volume variations were detected during treatment with the grosgrain stockings. In the first week, 15 (30.61%) limbs increased in volume and 34 (69.38%) reduced in size. In the second week, five (10.20%) continued to increase and 44 (89.79%) reduced; in the third week, four (8.16%) increased further and 45 (91.83%) reduced; and in the fourth week, only three limbs (6.12%) continued to increase and 46 (93.87%) reduced in size. Overall, the reductions were statistically significant (p-value < 0.001) in all evaluations when the baseline was compared with the other weeks, the first week with the others, and the second week with the others, but there was no significant difference between the third and fourth weeks (p-value = 0.07). Conclusion: The grosgrain stocking as monotherapy is effective in reducing swelling in the treatment of grade II lower-leg lymphedema. Patients should receive guidance and be trained in the correct usage of compression stockings.
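As a quick arithmetic check, the weekly percentages reported for this trial follow directly from the limb counts over the 49 assessed limbs. Note that the abstract appears to truncate rather than round to two decimal places (34/49 = 69.387...% is reported as 69.38%), so truncation is used here.

```python
# Reproduce the weekly percentages in the abstract from the raw limb counts.

def pct(count, total=49):
    """Percentage of limbs, truncated to two decimals as in the abstract."""
    return int(10000 * count / total) / 100.0

week1 = (pct(15), pct(34))   # limbs that increased vs reduced, week 1
week4 = (pct(3), pct(46))    # limbs that increased vs reduced, week 4
```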
637

Armazenamento e reconstrução de imagens comprimidas via codificação e decodificação / Storage and reconstruction of images by coding and decoding

Travassos, Natalia Caroline Lopes [UNESP] 19 December 2016 (has links)
Submitted by NATALIA CAROLINE LOPES TRAVASSOS null (nataliacaroline2006@yahoo.com.br) on 2017-02-15T14:56:05Z No. of bitstreams: 1 natalia_dissertação.pdf: 3422187 bytes, checksum: 73f8b94c641709f43b16401694318651 (MD5) / Approved for entry into archive by Juliano Benedito Ferreira (julianoferreira@reitoria.unesp.br) on 2017-02-21T17:52:48Z (GMT) No. of bitstreams: 1 travassos_ncl_me_ilha.pdf: 3422187 bytes, checksum: 73f8b94c641709f43b16401694318651 (MD5) / Made available in DSpace on 2017-02-21T17:52:48Z (GMT). No. of bitstreams: 1 travassos_ncl_me_ilha.pdf: 3422187 bytes, checksum: 73f8b94c641709f43b16401694318651 (MD5) Previous issue date: 2016-12-19 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This work presents an encoding algorithm for compressed images that represents each pixel of an image and its coordinates by a single value. For each pixel and its coordinates, this unique value is stored in a vector that is used in the reconstruction of the image without its quality being compromised. 
The proposed method improves on two previously proposed algorithms, one of which is itself an improvement of the first. The algorithm presented in this work differs from the other two in the following ways: it significantly reduces the space required for image storage, determines an exact compression rate, and reduces the decoding processing time. A further advance is the compression of color images using the wavemenu tool in conjunction with the algorithm that determines the compression ratio.
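The core idea of representing each pixel and its coordinates by a single value can be sketched with simple bit-packing. The bit layout below (16 bits per coordinate, 8 bits of intensity) is an assumption made for illustration; the thesis defines its own single-value representation.

```python
# Hypothetical packing of (x, y, intensity) into one integer, as a vector
# of such integers can reconstruct the image losslessly.

def pack(x, y, value):
    """Combine (x, y, value) into a single integer: x | y | 8-bit value."""
    return (x << 24) | (y << 8) | value

def unpack(code):
    """Recover (x, y, value) from the packed integer."""
    return (code >> 24, (code >> 8) & 0xFFFF, code & 0xFF)

# Round-trip a pixel at (300, 200) with intensity 137.
code = pack(300, 200, 137)
```

Because packing and unpacking are exact inverses, storing one integer per pixel loses no information, which matches the record's claim of reconstruction without quality loss.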
638

Adéquation moteurs propres-carburants en combustion homogène : Étude expérimentale en machine à compression rapide et modélisation de carburants modèles types gazoles en milieu ultra-pauvre / A modelling and experimental study in a rapid compression machine of the low-temperature autoignition of a surrogate diesel fuel in lean conditions

Crochet, Moïse 12 December 2008 (has links)
The oxidation, autoignition and combustion of hydrocarbons were studied at high pressure (0.34 - 2.32 MPa) and low temperature (600 - 950 K) for lean and ultra-lean gas mixtures. 
Three hydrocarbons typical of the chemical structure of diesel fuel components were chosen: n-butylbenzene for the aromatics, n-propylcyclohexane for the naphthenes and n-decane for the paraffins. An exhaustive base of experimental and theoretical thermokinetic data has been built. Autoignition experiments were conducted in an original rapid compression machine fitted with efficient analytical attachments. Global and detailed reactivity data were collected: autoignition delay times and the phenomenology of the autoignitions, together with speciation and quantification of oxidation intermediates. These data allowed the main reaction pathways leading to low-temperature autoignition and pollutant formation to be identified and interpreted. Complex reaction schemes and thermokinetic models were developed and validated against the new experimental data. Numerical simulations were performed to predict the effect of exhaust gas recirculation and of the compression phase. This work is part of a project of four theses aimed at developing efficient tools to optimise the clean and attractive HCCI (Homogeneous Charge Compression Ignition) combustion mode, which requires a deep understanding of the conditions of low-temperature autoignition of the fuel.
639

X-ray diffraction studies of shock compressed bismuth using X-ray free electron lasers

Gorman, Martin Gerard January 2016 (has links)
The ability to diagnose the structure of a material at extreme conditions of high pressure and high temperature is fundamental to understanding its behaviour, especially since it was found that materials adopt complex crystal structures at pressures in the terapascal regime (1 TPa). Static compression, using the diamond anvil cell coupled with synchrotron radiation, has to date been the primary method for structural studies of materials at high pressure. However, dynamic compression is the only method capable of reaching pressures comparable to the conditions found in the interiors of newly discovered exoplanets and gas giants, where such exotic high-pressure behaviour is predicted to be commonplace. While generating extreme conditions by shock compression has become a mature science, the lack of sufficiently bright X-ray sources has made it a considerable experimental challenge to directly observe the phase transformations seen in static studies. The commissioning of new 4th-generation light sources known as free electron lasers now provides stable, ultrafast X-ray pulses of unprecedented brightness, allowing in situ structural studies of shock-compressed materials and their phase transformation kinetics in detail not previously possible. Bismuth, with its highly complex phase diagram at modest pressures and temperatures, has been one of the most studied systems under both static and dynamic compression. Despite this, there has been no structural characterisation of the phases observed on shock compression, making it the ideal candidate for the first structural studies using X-ray radiation from a free electron laser. Here, bismuth was shock compressed with an optical laser and probed in situ with X-ray radiation from a free electron laser. 
The evolution of the crystal structure (or lack thereof) during compression and shock release was documented by taking snapshots of successive experiments, delayed in time. The melting of Bi on release from Bi-V was studied, with precise time scans showing the pressure releasing from the high-pressure Bi-V phase until the melt curve is reached off-Hugoniot. Remarkable agreement with the equilibrium melt curve is found, and the promise this technique holds for future off-Hugoniot melt curve studies at extreme conditions is discussed. In addition, shock melting studies of Bi were performed. The high-pressure Bi-V phase is observed to melt along the Hugoniot, where melting is unambiguously identified by the emergence of a broad liquid-scattering signature. These measurements definitively pin down where the Hugoniot intersects the melt curve - a source of some disagreement in recent years. Evidence is also presented for a change in the local structure of the liquid on shock release. The impact of these results is discussed. Finally, a sequence of solid-solid phase transformations is observed on both shock compression and shock release, detected by distinct changes in the obtained diffraction patterns. The well-established sequence of solid-solid phase transformations observed in previous static studies is not seen in our experiments; rather, Bi is found to exist in metastable structures instead of forming equilibrium phases. The implications these results have for observing reconstructive phase transformations in other materials on shock timescales are discussed.
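Phase identification in diffraction experiments like these rests on Bragg's law, n·lambda = 2·d·sin(theta): each crystalline phase is recognised by the d-spacings of its diffraction peaks. A minimal helper converting a peak's scattering angle 2-theta to a d-spacing is sketched below; the 1.0-angstrom wavelength is an arbitrary illustrative value, not the actual XFEL beam energy used.

```python
import math

def d_spacing(two_theta_deg, wavelength_angstrom):
    """First-order (n = 1) d-spacing, in angstroms, from Bragg's law."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))

# A peak at 2-theta = 30 degrees with a 1.0-angstrom beam:
d = d_spacing(30.0, 1.0)
```

Matching the set of measured d-spacings against those predicted for candidate structures is how the solid phases (and the broad liquid signature, which has no sharp peaks) are distinguished.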
640

Adaptive coding and rate control of video signals / CUHK electronic theses & dissertations collection

January 2015 (has links)
As bandwidth has become much cheaper in recent years, video applications are more popular than ever. However, the demand for higher video resolution, higher frame rates and greater bit depth has continued to grow faster than the cost of video transmission and storage bandwidth has fallen. This calls for more efficient compression techniques, and hence many international video coding standards have been developed over the past decades, such as MPEG-1/2/4 Part 2, H.264/MPEG-4 Part 10 AVC and the latest High Efficiency Video Coding (HEVC) standard. The main objective of this thesis is to analyze the characteristics of video signals and to provide efficient compression and transmission solutions in both H.264/AVC and HEVC video systems. The three main parts of this work are briefly summarized below. / The first part concerns transform coding. Transform coding has been widely used to remove the spatial redundancy of prediction residuals in modern video coding standards. However, since residual blocks exhibit diverse characteristics within a video sequence, conventional sinusoidal transforms with fixed kernels may result in low coding efficiency. To tackle this problem, we propose a novel content-adaptive transform framework for H.264/AVC-based video coding, which uses pixel rearrangement to dynamically adjust the transform kernels to the video signal. In addition, unlike traditional adaptive transforms, the proposed method derives the transform kernels from the reconstructed block, and hence consumes only a single logic indicator per transform unit. Moreover, a spiral-scanning method is developed to reorder the transform coefficients for better entropy coding. Experimental results on the Key Technical Area (KTA) platform show that the proposed method achieves a significant bit-rate reduction under both the all-intra and low-delay configurations. / The second part investigates next-generation video coding.
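The idea of deriving a transform kernel from already-decoded data, as in the first part above, can be sketched with a small example. This is only an illustrative sketch, not the thesis' actual design: it builds a KLT from the covariance of a reconstructed block (so the decoder can derive the same kernel without any kernel being transmitted), and the 4x4 size and random data are assumptions.

```python
import numpy as np

def adaptive_kernel(reconstructed_block):
    """Derive an orthonormal transform kernel (KLT) from a reconstructed block.

    Because the kernel comes from data the decoder already has, only a
    mode flag needs to be signalled, not the kernel itself.
    """
    # Treat each column as a sample of the block's vertical statistics.
    x = reconstructed_block - reconstructed_block.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    # Eigenvectors of the covariance form the orthonormal KLT basis.
    _, vecs = np.linalg.eigh(cov)
    return vecs[:, ::-1].T  # rows = basis vectors, largest eigenvalue first

def transform(residual, kernel):
    # Separable 2-D transform: kernel applied along rows and columns.
    return kernel @ residual @ kernel.T

def inverse_transform(coeffs, kernel):
    # Orthonormal kernel, so the transpose inverts it exactly.
    return kernel.T @ coeffs @ kernel

rng = np.random.default_rng(0)
recon = rng.integers(0, 256, size=(4, 4)).astype(float)  # "decoded" block
resid = rng.normal(0.0, 4.0, size=(4, 4))                # prediction residual

K = adaptive_kernel(recon)
coeffs = transform(resid, K)
```

Because `K` is orthonormal, `inverse_transform(coeffs, K)` recovers the residual exactly; in a real codec the coefficients would first be quantized and entropy coded.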
Due to the increase in display resolution from high definition (HD) to Ultra HD, efficiently compressing Ultra-HD signals is essential to the development of future video compression systems. High-resolution video coding benefits from larger prediction block sizes and the corresponding transform and quantization of prediction residuals. However, in the current HEVC standard the maximum coding tree unit (CTU) size is 64x64, which rules out larger prediction blocks in Ultra-HD video coding and hence harms coding efficiency. We therefore propose to extend the CTU to a super coding unit (SCU) for next-generation video coding, with two separate coding structures designed to encode an SCU: the Direct-CTU and SCU-to-CTU modes. In Direct-CTU, an SCU is first split into a number of predefined CTUs, and the best encoding parameters are then searched from the current CTU down to the minimum coding unit (MCU). Similarly, in SCU-to-CTU the best encoding parameters are searched from the SCU down to the CTU. In addition, the adaptive loop filter (ALF) and sample adaptive offset (SAO) methods are investigated within the SCU-based coding framework. We propose to move filtering control from the SCU level to the coding unit (CU) level, and an improved CU-level ALF signaling method is proposed to further improve coding efficiency. Furthermore, an adaptive SAO block method is proposed; this flexibility in SAO block sizes further improves on the traditional method for Ultra-HD video coding. / In the last part, we explore bit-rate control for video transmission. Rate control is an important technique for regulating the bit rate of video transmitted over limited bandwidth while maximizing overall video quality. Since quality fluctuation plays a key role in human visual perception, many rate control algorithms have been developed to maintain consistent quality in video communication.
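The Direct-CTU and SCU-to-CTU searches described above are both instances of recursive rate-distortion-optimized block splitting: keep a block whole or quarter it, whichever costs less. A minimal sketch follows, with a made-up cost function standing in for the real rate-distortion cost (the sizes and constants are illustrative, not the thesis' values):

```python
def rd_cost(size, x, y):
    # Hypothetical cost model: pretend image detail is highest near the
    # origin, so small blocks pay off there and large blocks elsewhere.
    # The constant 64.0 stands in for per-block header/signaling bits.
    detail = 1.0 / (1.0 + x + y)
    return size * size * detail + 64.0

def best_split(size, x, y, min_size):
    """Return (cost, partition) for the block at (x, y).

    The partition is the block size if kept whole, or a list of four
    child partitions if quartering is cheaper.
    """
    no_split = rd_cost(size, x, y)
    if size <= min_size:
        return no_split, size
    half = size // 2
    children = [best_split(half, x + dx, y + dy, min_size)
                for dy in (0, half) for dx in (0, half)]
    split_cost = sum(c for c, _ in children)
    if split_cost < no_split:
        return split_cost, [part for _, part in children]
    return no_split, size

# Search a 128x128 "SCU" down to an 8x8 minimum unit.
cost, tree = best_split(128, 0, 0, 8)
```

Under this framing, Direct-CTU corresponds to forcing the first split level and searching each child independently, while SCU-to-CTU runs the search from the SCU root itself.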
We propose a novel rate control framework based on the Lagrange multiplier in HEVC. Under the assumption of constant-quality control, a new relationship between distortion and the Lagrange multiplier is established. Based on the proposed distortion model and the buffer status, we obtain a computationally feasible solution to the problem of minimizing distortion variation across video frames at the coding tree unit level. Extensive simulation results show that our method outperforms the HEVC rate control, providing more accurate rate regulation, lower video-quality fluctuation and more stable buffer fullness. / Wang, Miaohui. / Thesis Ph.D. Chinese University of Hong Kong 2015.
/ Includes bibliographical references (leaves 158-164). / Abstracts and acknowledgements also in Chinese. / Title from PDF title page (viewed on 11 October 2016). / Detailed summary in vernacular field only.
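The Lagrange-multiplier rate control summarized in this abstract can be illustrated with a toy lambda-domain controller. This sketch assumes the common power-law model R = alpha * lambda^beta rather than the thesis' own distortion-lambda relationship, and every constant in it is illustrative:

```python
import math

class LambdaRateControl:
    """Toy lambda-domain rate controller (illustrative constants only)."""

    def __init__(self, alpha=3.2, beta=-1.367, buffer_size=1_000_000):
        self.alpha, self.beta = alpha, beta
        self.buffer_size = buffer_size
        self.fullness = buffer_size / 2  # start with a half-full buffer

    def pick_lambda(self, target_bits):
        # Invert R = alpha * lambda^beta for this frame's bit budget,
        # nudged by buffer fullness: spend fewer bits when nearly full.
        pressure = self.fullness / self.buffer_size          # 0..1
        adjusted = target_bits * (1.5 - pressure)
        return (adjusted / self.alpha) ** (1.0 / self.beta)

    def update(self, target_bits, actual_bits, lam):
        # Refresh the model from the observed rate, then update the buffer.
        err = math.log(actual_bits) - math.log(self.alpha * lam ** self.beta)
        self.alpha *= math.exp(0.1 * err)
        self.beta += 0.05 * err * math.log(lam)
        self.fullness += actual_bits - target_bits

rc = LambdaRateControl()
lam = rc.pick_lambda(target_bits=50_000)
rc.update(target_bits=50_000, actual_bits=55_000, lam=lam)
```

A real encoder would map lambda to a quantization parameter and clip it frame to frame, which is one way the quality-fluctuation constraint in the abstract is enforced.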
