311

Best Linear Unbiased Estimation Fusion with Constraints

Zhang, Keshu 19 December 2003 (has links)
Estimation fusion, or data fusion for estimation, is the problem of how best to utilize the information contained in multiple data sets to estimate an unknown quantity, either a parameter or a process. Estimation fusion with constraints gives rise to challenging theoretical problems when observations come from multiple geometrically dispersed sensors: Under dimensionality constraints, how should data be preprocessed at each local sensor to achieve the best estimation accuracy at the fusion center? Under communication bandwidth constraints, how should local sensor data be quantized to minimize the estimation error at the fusion center? Under storage constraints, how should state estimates at the fusion center be optimally updated with out-of-sequence measurements (OOSM)? And, under the same storage constraints, how should the OOSM update algorithm be applied to multisensor multitarget tracking in clutter? The present work addresses these topics by applying best linear unbiased estimation (BLUE) fusion. We propose optimal data compression that reduces sensor data from a higher dimension to a lower dimension with minimal or no performance loss at the fusion center. For single-sensor and some particular multisensor systems, we obtain the explicit optimal compression rule. For a multisensor system with a general dimensionality requirement, we propose a Gauss-Seidel iterative algorithm to search for the optimal compression rule. Another way to accomplish sensor data compression is to find an optimal sensor quantizer. Using BLUE fusion rules, we develop optimal sensor data quantization schemes subject to the bit rate constraints on communication between each sensor and the fusion center. For a dynamic system, we also show how to perform the state estimation and sensor quantization updates simultaneously, and we derive a closed-form recursion for a linear system with additive white Gaussian noise. A globally optimal OOSM update algorithm and a constrained optimal update algorithm are derived to solve one-lag as well as multi-lag OOSM update problems. To extend the OOSM update algorithms to multisensor multitarget tracking in clutter, we also study the performance of the OOSM update combined with the probabilistic data association (PDA) algorithm.
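As a concrete illustration of the fusion rule underlying this work, the sketch below fuses two unbiased local estimates under the textbook assumption of uncorrelated sensor errors. It is a minimal illustration only (the function name and example numbers are invented here); the thesis itself treats the far more general correlated and constrained cases.

```python
import numpy as np

def blue_fuse(x1, P1, x2, P2):
    """Fuse two unbiased local estimates of the same quantity.

    Assumes the two sensors' estimation errors are uncorrelated; the
    thesis treats the general correlated and constrained cases.
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)          # fused covariance
    x = P @ (P1i @ x1 + P2i @ x2)         # fused estimate
    return x, P

# Two noisy 2-D estimates of the same target position
x1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 0.5])
x2, P2 = np.array([1.2, 1.9]), np.diag([1.0, 0.25])
x, P = blue_fuse(x1, P1, x2, P2)
print(x, np.diag(P))  # fused variance is smaller than either sensor's
```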
312

Rate distortion analysis, optimization, and control in video coding

January 2007 (has links)
This thesis addresses rate distortion analysis, optimization, and control problems in video coding. These rate distortion issues not only provide the theoretical background but also bear on the practical design of video coding systems. The main objective of this thesis is to analyze the rate distortion characteristics of the video source and to provide optimal solutions or tradeoffs for the rate and distortion in video coding systems. More specifically, the thesis focuses on both the object-based video coding system, MPEG-4, and the rectangular frame-based video coding system, H.264/AVC.
Since strict rate control algorithms used in video coding sacrifice quality consistency, the rate distortion tradeoff is important for achieving a balance between bit rate and quality. A novel separable rate distortion modeling method is proposed to analyze the rate distortion characteristics of the color video signal; it provides higher estimation accuracy than the non-separable modeling method. To achieve the rate distortion tradeoff in H.264/AVC, a new control strategy is presented: feedback from the encoder buffer is analyzed by a control-theoretic adaptation approach to avoid buffer overflow and underflow, and a novel rate distortion tradeoff controller is designed by considering both quality variation and buffer fluctuation. Smooth video quality is achieved while the relevant constraints are satisfied.
Because video object coding has unique features, namely that both texture and shape coding introduce distortion and that video objects are of arbitrary shape, its rate distortion analysis and optimization strategies differ from traditional rectangular frame-based techniques. Two new rate distortion modeling methods are proposed for shape coding. The first is a linear rate distortion model with low computational complexity and accurate estimation. To further improve modeling performance, a novel statistical-learning-based method incorporating shape features is proposed. A joint texture-shape rate distortion modeling approach is then derived by integrating the texture and shape rate distortion models. These joint models provide the basis for optimal bit allocation in video object coding, minimizing coding distortion under the bit rate constraint while stabilizing buffer fullness. The major contribution of this optimal bit allocation scheme is a unified solution to two problems simultaneously: how to allocate bits between texture and shape, and how to distribute the bit budget among multiple video objects.
Another objective of this work is to study perceptually optimized video object coding. Since MPEG-4 treats a scene as a composition of video objects that are separately encoded and decoded, this flexible framework makes it possible to code different video objects with different priorities. The priorities of video objects are analyzed according to their intrinsic properties and psycho-visual characteristics so that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. An object-level visual attention model is developed to obtain visual attention information for video objects automatically; the attention values are incorporated in a newly developed dynamic bit allocation mechanism to improve the objective quality of high-priority objects so that the perceptual quality of the overall picture is maximized.
Chen, Zhenzhong. "July 2007." Adviser: King Ngi Ngan. Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1194. Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (p. 225-247). Abstract in English and Chinese. School code: 1307.
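The bit-allocation problems described above are instances of Lagrangian rate-distortion optimization. The sketch below is a generic illustration of that principle rather than the thesis's actual algorithm: for one block it selects the operating point minimizing the cost J = D + lambda * R, with hypothetical per-block (rate, distortion) options.

```python
def rd_optimal_choice(options, lam):
    """Pick the (rate, distortion) operating point minimizing
    the Lagrangian cost J = D + lam * R for one block."""
    return min(options, key=lambda rd: rd[1] + lam * rd[0])

# Hypothetical per-block operating points: (rate in bits, MSE distortion)
block_options = [(100, 40.0), (200, 18.0), (400, 9.5), (800, 6.0)]
for lam in (0.01, 0.05, 0.5):
    rate, dist = rd_optimal_choice(block_options, lam)
    print(f"lambda={lam}: rate={rate} bits, distortion={dist}")
```

Sweeping lambda from small to large moves the choice from high-rate, low-distortion points toward low-rate, high-distortion ones, which is how a rate controller can steer an encoder toward a target bit rate.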
313

A robust low bit rate quad-band excitation LSP vocoder.

January 1994 (has links)
by Chiu Kim Ming. Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 103-108).
Contents:
Chapter 1  Introduction, p.1
  1.1  Speech production, p.2
  1.2  Low bit rate speech coding, p.4
Chapter 2  Speech analysis & synthesis, p.8
  2.1  Linear prediction of speech signal, p.8
  2.2  LPC vocoder, p.11
    2.2.1  Pitch and voiced/unvoiced decision, p.11
    2.2.2  Spectral envelope representation, p.15
  2.3  Excitation, p.16
    2.3.1  Regular pulse excitation and multipulse excitation, p.16
    2.3.2  Coded excitation and vector sum excitation, p.19
  2.4  Multiband excitation, p.22
  2.5  Multiband excitation vocoder, p.25
Chapter 3  Dual-band and quad-band excitation, p.31
  3.1  Dual-band excitation, p.31
  3.2  Quad-band excitation, p.37
  3.3  Parameter determination, p.41
    3.3.1  Pitch detection, p.41
    3.3.2  Voiced/unvoiced pattern generation, p.43
  3.4  Excitation generation, p.47
Chapter 4  A low bit rate quad-band excitation LSP vocoder, p.51
  4.1  Architecture of the QBELSP vocoder, p.51
  4.2  Coding of excitation parameters, p.58
    4.2.1  Coding of pitch value, p.58
    4.2.2  Coding of voiced/unvoiced pattern, p.60
  4.3  Spectral envelope estimation and coding, p.62
    4.3.1  Spectral envelope & the gain value, p.62
    4.3.2  Line spectral pairs (LSP), p.63
    4.3.3  Coding of LSP frequencies, p.68
    4.3.4  Coding of gain value, p.77
Chapter 5  Performance evaluation, p.80
  5.1  Spectral analysis, p.80
  5.2  Subjective listening test, p.93
    5.2.1  Mean opinion score (MOS), p.93
    5.2.2  Diagnostic rhyme test (DRT), p.96
Chapter 6  Conclusions and discussions, p.99
References, p.103
Appendix A  Subroutine of pitch detection, p.A-I to A-III
Appendix B  Subroutine of voiced/unvoiced decision, p.B-I to B-V
Appendix C  Subroutine of LPC coefficient calculation using Durbin's recursive method, p.C-I to C-II
Appendix D  Subroutine of LSP calculation using Chebyshev polynomials, p.D-I to D-III
Appendix E  Single-syllable word pairs for the diagnostic rhyme test, p.E-I
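Appendix C above refers to LPC coefficient calculation using Durbin's recursive method. The following self-contained sketch of that standard recursion (Levinson-Durbin) computes prediction coefficients from a frame's autocorrelation; the synthetic test signal stands in for real speech.

```python
import numpy as np

def levinson_durbin(r, order):
    """Durbin's recursion: LPC coefficients from autocorrelation r[0..order].

    Returns prediction coefficients a[1..order] (so that s[n] is
    predicted by sum_k a[k] * s[n-k]) and the residual energy.
    """
    a = np.zeros(order + 1)
    e = r[0]
    for i in range(1, order + 1):
        # reflection coefficient for stage i
        k = (r[i] - np.dot(a[1:i], r[i-1:0:-1])) / e
        a_new = a.copy()
        a_new[i] = k
        a_new[1:i] = a[1:i] - k * a[i-1:0:-1]
        a = a_new
        e *= 1.0 - k * k  # residual prediction error shrinks each stage
    return a[1:], e

# Autocorrelation of a short synthetic "frame" (sinusoid plus noise)
rng = np.random.default_rng(0)
s = np.sin(0.3 * np.arange(240)) + 0.01 * rng.standard_normal(240)
r = np.array([np.dot(s[:len(s) - k], s[k:]) for k in range(11)])
coeffs, err = levinson_durbin(r, 10)
print(coeffs[:3], err)
```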
314

Analysis and optimisation of postbuckled damage tolerant composite laminates

Rhead, Andrew T. January 2009 (has links)
Barely visible impact damage (BVID) can occur when laminated composite material is subject to impact, for example from runway debris or dropped tools, and may result in a significant reduction in the compressive strength of composite structures. A component containing BVID and subjected to compression may fail via a number of mechanisms. Here it is assumed that the impact damage problems to be modelled fail by delamination buckling leading to propagation of damage away from the original site; this precludes problems where the initial failure mechanism is kink banding or buckling of the full laminate. An analytical model is presented, for application to various composite structures, which predicts the level of compressive strain below which growth of BVID following local buckling of a delaminated sublaminate will not occur. The model predicts the critical through-thickness level for delamination, the stability of delamination growth, and the sensitivity to experimental error in geometric measurements of the damage area, and additionally establishes properties desirable for laminates optimised for damage tolerance. Problems treated with the model are split into two impact categories, 'face' (an out-of-plane skin impact) and 'free edge' (an in-plane stiffener edge impact), and two compressive loading regimes, 'static' and 'fatigue'. Analytical results for static and fatigue compression of face-impacted plates agree with experimental threshold strains to within 4% and 17% respectively. In particular, for impacts to the skin under a stiffener subject to static loading the model is accurate to within 5%. Experiments with an optimised laminate stacking sequence show that an increase of up to 29% in static strength can be achieved in comparison to a baseline configuration. Finally, compression testing was undertaken on three coupons to validate the analysis of static free-edge problems; analytical results are, on average, within 10% of experimental values. An optimised laminate is theoretically predicted to increase static compressive strength after free-edge impact by at least 35%.
315

A study of the transmission of VBR encoded video over ATM networks.

January 1997 (has links)
by Ngai Li. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 66-69).
Contents:
Chapter 1  Introduction, p.1
  1.1  Video Compression and Transport, p.2
  1.2  Research Contributions, p.6
    1.2.1  Joint Rate Control of VBR Encoded Video, p.6
    1.2.2  Transporting VBR Video on LB Controlled Channel, p.7
  1.3  Organization of Thesis, p.7
Chapter 2  Preliminaries, p.9
  2.1  Statistical Characteristics of MPEG-1 Encoded Video, p.9
  2.2  Temporal and Spatial Smoothing, p.14
    2.2.1  Temporal Smoothing, p.14
    2.2.2  Spatial Smoothing, p.15
  2.3  A Single-Source Control-Theoretic Framework for VBR-to-CBR Video Adaptation, p.16
Chapter 3  Joint Rate Control of VBR Encoded Video, p.19
  3.1  Analytical Models, p.21
  3.2  Analysis, p.27
    3.2.1  Stable Region, p.29
    3.2.2  Final Value of the State Variables, p.33
    3.2.3  Peak Values of Buffer-occupancy Deviation and Image-quality Fluctuation, p.35
    3.2.4  SAE of Buffer-occupancy Deviation and Image-quality Fluctuation, p.42
  3.3  Experimental Results, p.43
  3.4  Concluding Remarks, p.48
Chapter 4  Transporting VBR Video on an LB Controlled Channel, p.50
  4.1  Leaky Bucket Access Control, p.51
  4.2  Greedy Token-usage Strategy, p.53
  4.3  Non-greedy Token-usage Strategy, p.57
  4.4  Concluding Remarks, p.60
Chapter 5  Conclusions, p.62
  5.1  Joint Rate Control of Multiple VBR Videos, p.62
  5.2  LB Video Compression, p.63
  5.3  Further Study, p.64
  5.4  Publications, p.65
Bibliography, p.65
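Chapter 4 above concerns leaky-bucket (LB) access control. As a point of reference, a generic token-bucket policer of the kind analyzed in such work can be sketched as follows; the parameters and arrival pattern are illustrative and not taken from the thesis.

```python
def leaky_bucket(arrival_times, rate, bucket_size):
    """Token-bucket policer: a cell conforms if a token is available.

    Tokens accrue at `rate` per unit time, capped at `bucket_size`;
    each conforming cell consumes one token.
    """
    tokens, last_t = float(bucket_size), 0.0
    conforming = []
    for t in arrival_times:
        tokens = min(bucket_size, tokens + rate * (t - last_t))
        last_t = t
        if tokens >= 1.0:
            tokens -= 1.0
            conforming.append(True)
        else:
            conforming.append(False)
    return conforming

# A burst exhausts the bucket; a later cell conforms again
print(leaky_bucket([0.0, 0.1, 0.2, 0.3, 5.0], rate=1.0, bucket_size=2))
# -> [True, True, False, False, True]
```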
316

Robust header compression in 4G networks

Santos, António Pedro Freitas Fortuna dos January 2007 (has links)
Master's thesis. Communication Networks and Services. Faculdade de Engenharia, Universidade do Porto, 2007.
317

An Analysis of Approaches to Efficient Hardware Realization of Image Compression Algorithms

Iravani, Kamran 27 October 1994 (has links)
In this thesis an attempt has been made to develop a fast algorithm to compress images. The Reed-Muller compression algorithm introduced by Reddy & Pai [3] is fast, but its compression factor is too low compared to other methods. The thesis first attempts to improve this method by generalizing the Reed-Muller transform to the fixed-polarity Reed-Muller form, and shows that the fixed-polarity transform does not improve the compression factor enough to warrant its use as an image compression method. The paper by Reddy & Pai [3] on Reed-Muller image compression is critiqued, and it is shown that crucial errors in that paper make it impossible to evaluate the quality and compression factors of their approach. Finally, a simple and fast method for image compression is introduced. This method takes advantage of the high correlation between adjacent pixels of an image: if the matrix of pixel values is divided into bit planes from the most significant bit (MSB) plane to the least significant bit (LSB) plane, most adjacent bits in the four MSB planes (MSB, 2nd MSB, 3rd MSB and 4th MSB) are the same. Using this fact, a method is developed that XORs adjacent lines of the MSB planes bit by bit and then XORs the resulting planes bit by bit. It is shown that this method gives a much better compression factor, and can be realized with much simpler hardware, than the Reed-Muller image compression method.
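A minimal sketch of the bit-plane decorrelation step described above is given below, assuming an 8-bit grayscale image held in a NumPy array. Only the line-to-line XOR of the four MSB planes is shown; the subsequent XOR between the resulting planes and the entropy coding of the sparse output are omitted.

```python
import numpy as np

def msb_plane_xor(img):
    """Split an 8-bit image into its 4 MSB bit planes and XOR each
    scanline with the one above it, leaving sparse planes that suit
    simple run-length style coding (not shown)."""
    planes = [(img >> b) & 1 for b in (7, 6, 5, 4)]
    decorrelated = []
    for p in planes:
        d = p.copy()
        d[1:] ^= p[:-1]  # adjacent-line XOR: identical rows become zeros
        decorrelated.append(d)
    return decorrelated

# Smooth ramp image: adjacent rows mostly agree in the MSB planes
img = np.clip(np.add.outer(np.arange(64), np.arange(64)) * 2, 0, 255).astype(np.uint8)
for d in msb_plane_xor(img):
    print(d.mean())  # fraction of 1-bits remaining per plane
```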
318

Explorations In Searching Compressed Nucleic Acid And Protein Sequence Databases And Their Cooperatively-Compressed Indices

Gardner-Stephen, Paul Mark, paul.gardner-stephen@flinders.edu.au January 2008 (has links)
Nucleic acid and protein databases such as GenBank are growing at a rate that perhaps eclipses even Moore's Law of increase in computational power. This poses a problem for the biological sciences, which have become increasingly dependent on searching and manipulating these databases. It was once reasonably practical to perform exhaustive searches of these databases, for example using the algorithm described by Smith and Waterman, but it has been many years since this was the case. This has led to the development of a series of search algorithms, such as FASTA, BLAST and BLAT, each successively faster, but at a similarly successive cost in thoroughness. Attempts have been made to remedy this problem by devising search algorithms that are both fast and thorough. An example is CAFE, which seeks to construct a search system with a sub-linear relationship between search time and database size, and argues that this property must be present for any search system to be successful in the long term. This dissertation explores this notion by seeking to construct a search system that takes advantage of the growing redundancy in databases such as GenBank to reduce both the search time and the space required to store the databases and their indices, while preserving or increasing the thoroughness of the search. The result is the creation and implementation of new genomic sequence search and alignment, database compression, and index compression algorithms and systems that make progress toward reducing search time and space requirements while improving sensitivity. However, success is tempered by the need for databases with adequate local redundancy, and by the computational cost of these algorithms when servicing un-batched queries.
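For reference, the exhaustive search mentioned above is the Smith-Waterman algorithm. The sketch below computes the optimal local alignment score with a simple linear gap penalty (the scoring values are illustrative); its O(mn) cost per query is what makes exhaustive search of modern databases impractical.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Exhaustive local alignment score with a linear gap penalty."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                    # local alignment can restart
                          H[i - 1][j - 1] + s,  # match/mismatch
                          H[i - 1][j] + gap,    # gap in b
                          H[i][j - 1] + gap)    # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))  # best local match score
```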
319

Scanline calculation of radial influence for image processing

Ilbery, Peter William Mitchell, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2008 (has links)
Efficient methods for the calculation of radial influence are described and applied to two image processing problems, digital halftoning and mixed content image compression. The methods operate recursively on scanlines of image values, spreading intensity from scanline to scanline in proportions approximating a Cauchy distribution. For error diffusion halftoning, experiments show that this recursive scanline spreading provides an ideal pattern of error distribution. Error diffusion using masks generated to provide this distribution alleviates error diffusion "worm" artifacts. The recursive scanline-by-scanline application of a spreading filter and a complementary filter can be used to reconstruct an image from its horizontal and vertical pixel difference values; when combined with a downsampled image, the reconstruction is robust to incomplete and quantized pixel difference data. Such gradient field integration methods are described in detail, proceeding from the representation of images by gradient values along contours through to a variety of efficient algorithms. Comparisons show that this form of gradient field integration by convolution produces less distortion than other high-speed gradient integration methods; the reduction can be attributed to success in approximating a radial pattern of influence. An approach to edge-based image compression is proposed using integration of gradient data along edge contours together with regularly sampled low-resolution image data. This edge-based compression model is similar to previous sketch-based image coding methods but allows a simple and efficient calculation of an edge-based approximation image. A low-complexity implementation of this approach is described: it extracts and represents gradient data along edge contours as pixel differences and calculates an approximate image by integrating the pixel difference data via scanline convolution. The implementation was developed as a prototype for compression of mixed content image data in printing systems. Compression results are reported and strengths and weaknesses of the implementation are identified.
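For context, conventional error-diffusion halftoning can be sketched as follows using the classic Floyd-Steinberg mask. This is the baseline whose "worm" artifacts the radial-influence masks described above are designed to alleviate; it is not the thesis's algorithm.

```python
import numpy as np

def floyd_steinberg(img):
    """Classic Floyd-Steinberg error diffusion: threshold each pixel
    and push the quantization error onto unprocessed neighbours."""
    f = img.astype(float).copy()
    out = np.zeros_like(f)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            new = 255.0 if f[y, x] >= 128 else 0.0
            err = f[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)

# Halftone a horizontal gray ramp
gradient = np.tile(np.linspace(0, 255, 64), (16, 1))
print(floyd_steinberg(gradient)[:2, :8])
```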
320

Listless zerotree image and video coding

Lin, Wen-Kuo January 2001 (has links)
Includes bibliographical references (leaves 199-214). xxx, 214 leaves : ill. (some col.), plates (col.) ; 30 cm. Title page, contents and abstract only; the complete thesis in print form is available from the University Library. Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 2002.
