261 |
DCT-based Image/Video Compression: New Design Perspectives. Sun, Chang. January 2014.
To push the envelope of DCT-based lossy image/video compression, this thesis revisits the design of several fundamental building blocks of image/video coding, from source modelling and quantization table design to quantizers and entropy coding. Firstly, to better handle the heavy-tail phenomenon commonly seen in DCT coefficients, a new model dubbed the transparent composite model (TCM) is developed and justified. Given a sequence of DCT coefficients, the TCM first separates the tail from the main body of the sequence, then models the coefficients in the heavy tail with a uniform distribution and those in the main body with a parametric distribution. The separation boundary and the other distribution parameters are estimated online via maximum likelihood (ML) estimation; efficient online algorithms are proposed for this estimation, and their convergence is proved. When the parametric distribution is a truncated Laplacian, the resulting model, dubbed the Laplacian TCM (LPTCM), not only achieves superior modelling accuracy with low estimation complexity, but also enables nonlinear data reduction by identifying and separating each DCT coefficient in the heavy tail (referred to as an outlier) from those in the main body (referred to as inliers). This in turn opens up opportunities for its use in DCT-based image compression.
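As a rough illustration of the separation idea only (a simplified sketch, not the thesis's efficient online estimator; the function name lptcm_fit, the grid search over candidate boundaries, and the approximate scale estimate are all simplifications introduced here), the boundary between inliers and outliers can be chosen by maximising a composite likelihood over coefficient magnitudes:

    import numpy as np

    def lptcm_fit(coeffs, a_max=None, num_candidates=64):
        """Grid-search ML fit of a simplified Laplacian transparent composite
        model on DCT-coefficient magnitudes: magnitudes <= yc (inliers) follow a
        truncated Laplacian, magnitudes in (yc, a_max] (outliers) are uniform.
        Returns the boundary yc, the Laplacian scale b, and the outlier fraction p."""
        x = np.abs(np.asarray(coeffs, dtype=float))
        if a_max is None:
            a_max = x.max() + 1e-12
        n = x.size
        best = (-np.inf, None)
        for yc in np.linspace(np.percentile(x, 50), a_max, num_candidates):
            inliers = x[x <= yc]
            n_out = n - inliers.size
            p = n_out / n                      # outlier probability
            b = max(inliers.mean(), 1e-12)     # approximate ML scale (exact ML for the
                                               # truncated Laplacian needs a 1-D solve)
            trunc = 1.0 - np.exp(-yc / b)      # Laplacian mass below the boundary
            ll_in = (inliers.size * np.log(max(1.0 - p, 1e-12))
                     - inliers.size * np.log(b * trunc) - inliers.sum() / b)
            ll_out = 0.0
            if n_out > 0 and a_max > yc:
                ll_out = n_out * (np.log(max(p, 1e-12)) - np.log(a_max - yc))
            ll = ll_in + ll_out
            if ll > best[0]:
                best = (ll, (yc, b, p))
        return best[1]

    # Synthetic "DCT-like" data: mostly Laplacian with a few large outliers.
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.laplace(0, 3, 5000), rng.uniform(-60, 60, 50)])
    yc, b, p = lptcm_fit(data)
    print(f"boundary={yc:.1f}, scale={b:.2f}, outlier fraction={p:.3%}")

The grid search only conveys how the boundary trades the truncated-Laplacian fit of the main body against the uniform fit of the tail; the thesis develops online ML estimators with proven convergence for the same task.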
Secondly, quantization table design is revisited for image/video coding in which soft-decision quantization (SDQ) is considered. Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design the quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate-distortion function of each source. We then show that a quantization table can be optimized so that the resulting distortion follows a particular profile, yielding the so-called optimal distortion profile scheme (OptD). Guided by this new theoretical result, we present an efficient statistical-model-based algorithm that uses the Laplacian model to design quantization tables for DCT-based image compression. When applied to standard JPEG encoding, it provides a performance gain of more than 1.5 dB (in PSNR) with almost no extra complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5 dB gain with computational complexity reduced by a factor of more than 2000 when SDQ is off, and a performance gain of 0.1 dB or more with the complexity reduced by 85% when SDQ is on.
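For reference, the Shannon lower bound invoked above has a simple closed form for a Laplacian coefficient under squared-error distortion (a standard result; the scale parameter b is our notation, not the thesis's):

    R(D) \;\ge\; h(X) - \tfrac{1}{2}\log_2(2\pi e D)
         \;=\; \tfrac{1}{2}\log_2\!\frac{2\,e\,b^{2}}{\pi D},
    \qquad h(X) = \log_2(2 e b)\ \text{bits for } X \sim \mathrm{Laplace}(0, b).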
Thirdly, based on the LPTCM and OptD, we further propose an efficient non-predictive DCT-based image compression system in which the quantizers and entropy coding are completely redesigned and the corresponding SDQ algorithm is developed. In terms of rate versus visual quality, the proposed system achieves overall coding results that are among the best and are similar to those of H.264 or HEVC intra (predictive) coding. In terms of rate versus objective quality, it outperforms baseline JPEG by more than 4.3 dB on average with a moderate increase in complexity, and outperforms ECEB, the state-of-the-art non-predictive image codec, by 0.75 dB when SDQ is off at the same level of computational complexity, and by 1 dB when SDQ is on at the cost of extra complexity. Compared with H.264 intra coding, our system provides an overall gain of roughly 0.4 dB with dramatically reduced computational complexity. It offers coding performance comparable to, or even better than, HEVC intra coding in the high-rate region or for complicated images, yet with less than 5% of the latter's encoding complexity. In addition, the proposed system offers a multiresolution capability, which, together with its comparatively high coding efficiency and low complexity, makes it a good alternative for real-time image processing applications.
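For context, the objective-quality gains quoted above are PSNR differences; for 8-bit images (standard definition, stated here only as background)

    \mathrm{PSNR} \;=\; 10 \log_{10}\!\frac{255^{2}}{\mathrm{MSE}}\ \text{dB},

so a gain of \Delta dB corresponds to the mean squared error shrinking by a factor of 10^{\Delta/10}.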
|
262 |
Application of VAX/VMS graphics for solving preliminary ship design problems. McGowan, Gerald K. 12 1900.
Approved for public release; distribution is unlimited / The VAX/VMS UIS graphics library routines were used to create a menu-driven, interactive program that solves basic preliminary ship design problems. The program uses a menu with an active mouse and keyboard to select options, enter data, and control program execution. At present, the program solves transverse and longitudinal static stability problems and predicts the effects of shifting weight in three planes. It also calculates the hydrodynamic derivatives for maneuvering performance and predicts the turning circle characteristics of the ship. Provisions for a detailed hardcopy report are also included, and space has been allocated for future program modules or user-supplied programs.
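The static-stability relations behind such a program are standard; the sketch below (ours, with hypothetical function and parameter names, not the original VAX/VMS UIS program) shows how a weight shift translates into a corrected metacentric height and a list angle:

    from math import atan, degrees

    def weight_shift_effects(displacement, gm, w, dz=0.0, dy=0.0):
        """Effect of moving a weight w (same units as displacement) within the ship.
        dz: vertical shift (positive up)        -> raises G and reduces GM
        dy: transverse shift (positive to side) -> heels the ship
        Returns the corrected metacentric height and the resulting list angle (deg)."""
        gg_vertical = w * dz / displacement      # rise of the centre of gravity
        gm_new = gm - gg_vertical                # corrected metacentric height
        gg_transverse = w * dy / displacement    # athwartships shift of G
        list_deg = degrees(atan(gg_transverse / gm_new)) if gm_new > 0 else float("nan")
        return gm_new, list_deg

    # Example: 8000-tonne ship, GM = 1.2 m; move 50 t up 3 m and 6 m to starboard.
    print(weight_shift_effects(8000.0, 1.2, 50.0, dz=3.0, dy=6.0))

A longitudinal weight shift is handled analogously, with the longitudinal metacentric height giving the change of trim.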
|
263 |
Fluid-structure interaction of submerged shells. Randall, Richard John. January 1990.
A general three-dimensional hydroelasticity theory for the evaluation of responses has been adapted to formulate hydrodynamic coefficients for submerged shell-type structures. The derivation of the theory is presented and placed in context with other methods of analysis, and the ability of this form of analysis to offer insight into the physical behaviour of practical systems is demonstrated. The influence of external boundaries and of fluid viscosity was considered separately, using a flexible cylinder as the model. When the surrounding fluid is water, viscosity was assessed to be significant for slender structural members and flexible pipes, and in situations where the clearance to an outer casing is slight. To validate the three-dimensional hydroelasticity theory, predictions of resonance frequencies and mode shapes were compared with measured data from trials undertaken in enclosed tanks. These data exhibited differences due to the position of the test structures in relation to free and fixed boundaries. The rationale of the testing programme and practical considerations of instrumentation and of the capture and storage of data are described in detail. At first sight a relatively unsophisticated analytical method appeared to offer better correlation with the measured data than the hydroelastic solution. This impression was mistaken: the agreement was merely fortuitous, as only the hydroelastic approach is capable of reproducing the trends recorded in the experiments. The significance of an accurate dynamic analysis using finite elements and the influence of physical factors such as buoyancy on the predicted results are also examined.
|
264 |
Some current issues in the statistical analysis of spillovers. Gumprecht, Daniela; Gumprecht, Nicole; Müller, Werner. January 2003.
Spillover phenomena are usually estimated statistically on the basis of regional and temporal panel data. In this paper we review and investigate exploratory and confirmatory statistical panel-data techniques. We illustrate the methods with calculations in the setting of the well-known Research and Development Spillover study by Coe and Helpman (1995). It is demonstrated that alternative estimation techniques that are equally compatible with the data can lead to opposite conclusions. (author's abstract) / Series: Working Papers Series "Growth and Employment in Europe: Sustainability and Competitiveness"
|
265 |
Two-dimensional Finite Volume Weighted Essentially Non-oscillatory Euler Schemes with Uniform and Non-uniform Grid Coefficients. Elfarra, Monier Ali. 01 February 2005.
In this thesis, Finite Volume Weighted Essentially Non-Oscillatory (FV-WENO) codes for the one- and two-dimensional discretised Euler equations are developed. The construction and application of the FV-WENO scheme and codes are described. The effects of the grid coefficients, as well as the effect of Gaussian quadrature, on the solution are also tested and discussed.
WENO schemes are high-order accurate schemes designed for problems with piecewise-smooth solutions containing discontinuities. The key idea lies at the approximation level, where a convex combination of all the candidate stencils is used with certain weights; those weights are chosen so as to suppress the stencils that contain a discontinuity. WENO schemes have been quite successful in applications, especially for problems containing both shocks and complicated smooth solution structures.
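As an illustration of this convex-combination idea, the classical fifth-order Jiang-Shu reconstruction on a uniform grid can be sketched as follows (a generic textbook formulation given for orientation, not the thesis's FV-WENO code, and without the non-uniform grid coefficients studied in the thesis):

    import numpy as np

    def weno5_reconstruct(v, eps=1e-6):
        """Classical 5th-order WENO (Jiang-Shu) reconstruction of the left state at
        the interface i+1/2 from the five cell averages v[i-2..i+2] on a uniform grid."""
        vm2, vm1, v0, vp1, vp2 = v
        # Third-order candidate reconstructions on the three sub-stencils
        q0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
        q1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
        q2 = (2*v0 + 5*vp1 - vp2) / 6.0
        # Smoothness indicators: large on any stencil containing a discontinuity
        b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
        b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
        b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
        # Nonlinear weights built from the optimal linear weights 1/10, 6/10, 3/10
        a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
        w = a / a.sum()
        return w @ np.array([q0, q1, q2])

    print(weno5_reconstruct([1.0, 1.1, 1.2, 1.3, 1.4]))  # smooth data: ~1.25
    print(weno5_reconstruct([1.0, 1.0, 1.0, 5.0, 5.0]))  # jump: stays near 1, no overshoot

On smooth data the nonlinear weights approach the optimal linear weights and the scheme attains fifth-order accuracy; near a jump the smoothness indicators blow up on the offending stencils and their weights collapse towards zero, which is precisely the elimination mechanism described above.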
The applications tested in this thesis are a diverging nozzle, shock-vortex interaction, supersonic channel flow, flow over a bump, and a supersonic staggered wedge cascade.
The numerical solutions for the diverging nozzle and the supersonic channel flow are compared with analytical solutions, the results for the shock-vortex interaction are compared with Roe-scheme results, and the results for the bump flow and the supersonic staggered cascade are compared with results from the literature.
|
266 |
[Original Article] Estimation of Equating Coefficients in the IRT Normal Ogive Model. Noguchi, Hiroyuki (野口, 裕之). 25 December 1989.
This content was digitized by the National Institute of Informatics.
|
267 |
Prediction and Estimation of Random Fields. Kohli, Priya. August 2012.
For a stationary two-dimensional random field, we utilize the classical Kolmogorov-Wiener theory to develop prediction methodology that requires minimal assumptions on the dependence structure of the random field. We also provide solutions for several non-standard prediction problems which deal with a "modified past," in which a finite number of observations is added to the past. These non-standard prediction problems are motivated by network site selection in environmental and geostatistical applications. Unlike the time series situation, the prediction results for random fields seem to be expressible only in terms of the moving-average parameters, and attempts to express them in terms of the autoregressive parameters lead to a new and mysterious projection operator which captures the nature of edge effects. We put forward an approach for estimating the predictor coefficients by carrying out an extension of the exponential models. Through simulation studies and a real data example, we demonstrate the impressive performance of our prediction method. To the best of our knowledge, the proposed method is the first to deliver a unified framework for forecasting random fields in both the time and spectral domains without making a subjective choice of the covariance structure.
Finally, we focus on estimation of the Hurst parameter for long-range dependent stationary random fields, which draws its motivation from applications in environmental and atmospheric processes. Current methods for estimating the Hurst parameter include parametric models, such as fractional autoregressive integrated moving average models, and semiparametric estimators, which are either inefficient or inconsistent. We propose a novel semiparametric estimator based on the fractional exponential spectrum. We develop three data-driven methods that can automatically select the optimal model order for the fractional exponential models. Extensive simulation studies and an analysis of Mercer and Hall's wheat data are used to illustrate the performance of the proposed estimator and the model-order selection criteria. The results show that our estimator outperforms existing estimators, including the GPH (Geweke and Porter-Hudak) estimator. We show that the proposed estimator is consistent, works for different definitions of long-range dependent random fields, is computationally simple, and is not susceptible to model misspecification or poor efficiency.
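For orientation, the GPH benchmark mentioned above is a log-periodogram regression; a minimal sketch of its basic one-dimensional form is given below (an illustration of the baseline only, not the fractional-exponential estimator proposed in this work; the bandwidth m = sqrt(n) is one common convention):

    import numpy as np

    def gph_hurst(x, m=None):
        """GPH log-periodogram estimate of the Hurst parameter for a 1-D series:
        regress log I(w_j) on log(4 sin^2(w_j / 2)) over the first m Fourier
        frequencies; the negated slope estimates the memory parameter d, and
        H = d + 1/2 for stationary long-memory processes."""
        x = np.asarray(x, dtype=float)
        n = x.size
        if m is None:
            m = int(np.sqrt(n))                  # common bandwidth choice
        j = np.arange(1, m + 1)
        w = 2 * np.pi * j / n                    # Fourier frequencies
        dft = np.fft.fft(x - x.mean())[1:m + 1]
        I = np.abs(dft) ** 2 / (2 * np.pi * n)   # periodogram ordinates
        X = np.log(4 * np.sin(w / 2) ** 2)
        slope = np.polyfit(X, np.log(I), 1)[0]
        return -slope + 0.5                      # H = d + 1/2

    # Sanity check: white noise has no long memory, so H should come out near 0.5.
    rng = np.random.default_rng(1)
    print(gph_hurst(rng.standard_normal(4096)))

Extensions to random fields replace the one-dimensional periodogram with its two-dimensional analogue, but the regression idea is the same.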
|
268 |
Functional calculus and coadjoint orbits. Raffoul, Raed Wissam. Mathematics & Statistics, Faculty of Science, UNSW. January 2007.
Let G be a compact Lie group and let π be an irreducible representation of G of highest weight λ. We study the operator-valued Fourier transform of the product of the j-function and the pull-back of π by the exponential mapping. We show that the set of extremal points of the convex hull of the support of this distribution is the coadjoint orbit through λ + ρ, where ρ is the half-sum of the positive roots. The singular support is furthermore the union of the coadjoint orbits through λ + wρ, as w runs through the Weyl group. Our methods involve the Weyl functional calculus for noncommuting operators, the Nelson algebra of operants and the geometry of the moment set for a Lie group representation. In particular, we re-obtain the Kirillov-Duflo correspondence for compact Lie groups, independently of character formulae. We also develop a "noncommutative" version of the Kirillov character formula, valid for noncentral trigonometric polynomials. This generalises work of Cazzaniga, 1992.
|
269 |
Complexity in systems and organisations: problems of new systems' implementation. January 2005.
Thesis (Ph.D.)--University of Wollongong, 2005. / Typescript. Includes appendices. Bibliographical references: leaf 175-181.
|
270 |
Slab-geometry molecular dynamics simulations: development and application to calculation of activity coefficients, interfacial electrochemistry, and ion channel transport. Crozier, Paul S. January 2001.
Thesis (Ph. D.)--Brigham Young University. Dept. of Chemical Engineering, 2001. / Includes bibliographical references.
|