41

Rate-Distortion Performance And Complexity Optimized Structured Vector Quantization

Chatterjee, Saikat 07 1900
Although vector quantization (VQ) is an established topic in communication, its practical utility has been limited by (i) prohibitive complexity at higher quality and bit-rate, (ii) structured VQ methods that are not analyzed for optimum performance, and (iii) the difficulty of mapping the theoretical performance of mean square error (MSE) to perceptual measures. However, the ever increasing demand for compression of various source signals points to VQ as the inevitable choice for high efficiency. This thesis addresses all three of the above issues, utilizing the power of parametric stochastic modeling of the signal source, viz., the Gaussian mixture model (GMM), and proposes new solutions. Addressing some of the new requirements of source coding in network applications, the thesis also presents solutions for scalable bit-rate, rate-independent complexity and decoder scalability.

While structured VQ is a necessity to reduce complexity, we have developed, analyzed and compared three different schemes of compensation for the loss due to structured VQ. Focusing on the widely used methods of split VQ (SVQ) and KLT based transform domain scalar quantization (TrSQ), we develop expressions for their optimum performance using high rate quantization theory. We propose the use of conditional PDF based SVQ (CSVQ) to compensate for the split loss in SVQ and analytically show that it achieves a coding gain over SVQ. Using the analytical expressions for complexity, an algorithm to choose the optimum splits is proposed. We analyze these techniques for their complexity as well as perceptual distortion, considering the specific case of quantizing the wideband speech line spectrum frequency (LSF) parameters. Using natural speech data, it is shown that the new conditional PDF based methods provide better perceptual distortion performance than the traditional methods.

Exploring the use of GMMs for the source, we take the approach of separately estimating the GMM parameters and then using high rate quantization theory in a simplified manner to derive closed form expressions for optimum MSE performance. This has led to the development of non-linear prediction for compensating the split loss (in contrast to the linear prediction arising from a Gaussian model). We show that the GMM approach can improve the recently proposed adaptive VQ scheme of switched SVQ (SSVQ). We derive the optimum performance expressions for SSVQ, in both variable bit rate and fixed bit rate formats, using the simplified GMM approach in high rate theory.

As a third scheme for recovering the split loss in SVQ and reducing the complexity, we propose a two stage SVQ (TsSVQ), which is analyzed for minimum complexity as well as perceptual distortion. Utilizing the low complexity of transform domain SVQ (TrSVQ) as well as the two stage approach in a universal coding framework, it is shown that we can achieve both low complexity and better performance than SSVQ. Further, the combination of GMM and universal coding led to the development of a highly scalable coder which provides bit-rate scalability, decoder scalability and rate-independent low complexity, while its perceptual distortion performance remains comparable to that of SSVQ. Since the GMM is a generic source model, we develop a new method of predicting the performance bound for perceptual distortion using VQ. Applying this method to LSF quantization, the minimum bit rates for quantizing telephone band LSF (TB-LSF) and wideband LSF (WB-LSF) parameters are derived.
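The complexity issue the abstract targets is concrete: a full-search VQ at b bits must scan 2^b codevectors, while split VQ scans only the sum of the sub-codebook sizes. Below is a minimal numpy sketch of plain SVQ with toy Lloyd-trained codebooks; the 4+6 split and all names are illustrative, not taken from the thesis.

```python
import numpy as np

def train_codebook(data, bits, iters=20, seed=0):
    """Toy Lloyd (k-means) codebook trainer with 2**bits codevectors."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), 2**bits, replace=False)]
    for _ in range(iters):
        # Nearest-centroid assignment, then centroid update.
        idx = ((data[:, None] - codebook[None]) ** 2).sum(-1).argmin(axis=1)
        for k in range(len(codebook)):
            if np.any(idx == k):
                codebook[k] = data[idx == k].mean(axis=0)
    return codebook

def split_vq(x, codebooks, dims):
    """Quantize each sub-vector independently. Search cost is the SUM of
    sub-codebook sizes (2 * 2**8 here) instead of the PRODUCT (2**16) that
    an unconstrained 16-bit VQ would need; the 'split loss' is the price."""
    parts, start = [], 0
    for cb, d in zip(codebooks, dims):
        seg = x[start:start + d]
        parts.append(cb[((cb - seg) ** 2).sum(-1).argmin()])
        start += d
    return np.concatenate(parts)

# 10-dimensional LSF-like vectors, split 4+6 with 8 bits per part.
data = np.random.default_rng(1).normal(size=(2000, 10))
cbs = [train_codebook(data[:, :4], 8), train_codebook(data[:, 4:], 8)]
x_hat = split_vq(data[0], cbs, [4, 6])
```

The conditional-PDF variant (CSVQ) described in the abstract recovers part of the split loss by letting the second part's quantizer depend on the first part; the sketch above is the uncompensated baseline it improves on.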
42

The General Quantization Problem for Distributions with Regular Support

Pötzelberger, Klaus January 1999 (PDF)
We study the asymptotic behavior of the quantization error for general information functions and prove results for distributions P with regular support. We characterize the information functions for which the uniform distribution on the set of prototypes converges weakly to P. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
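For orientation, the classical Euclidean $L_r$ case of the general information functions studied here reads as follows (standard high-resolution quantization facts, stated under the usual moment conditions, not results specific to this thesis):

$$
e_{n,r}(P) \;=\; \inf_{A \subset \mathbb{R}^d,\; |A| \le n} \Big( \int \min_{a \in A} \lVert x - a \rVert^{r} \, dP(x) \Big)^{1/r},
\qquad
\lim_{n \to \infty} n^{r/d}\, e_{n,r}(P)^{r} \;=\; C(r,d) \Big( \int f^{d/(d+r)} \, d\lambda \Big)^{(d+r)/d}
$$

for $P$ absolutely continuous with density $f$. In this classical case the uniform distribution on the optimal prototypes converges weakly not to $P$ itself but to the measure with density proportional to $f^{d/(d+r)}$; characterizing the information functions for which the limit is $P$ is precisely the question the abstract raises.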
43

The Consistency of the Empirical Quantization Error

Pötzelberger, Klaus January 1999 (PDF)
We study the empirical quantization error in the case where the number of prototypes increases with the sample size. We present a proof of the consistency of the empirical quantization error and of corresponding estimators of the quantization dimensions of distributions. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
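A minimal numpy illustration of the quantities involved, assuming the empirical quantization error is computed with Lloyd's algorithm and the (r = 2) quantization dimension is read off the log-log slope of error versus codebook size; all names and constants are illustrative:

```python
import numpy as np

def lloyd(sample, n, iters=30, seed=0):
    """Fit n prototypes to the sample with Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    protos = sample[rng.choice(len(sample), n, replace=False)]
    for _ in range(iters):
        idx = ((sample[:, None] - protos[None]) ** 2).sum(-1).argmin(axis=1)
        for k in range(n):
            if np.any(idx == k):
                protos[k] = sample[idx == k].mean(axis=0)
    return protos

def empirical_error(sample, protos):
    # Mean squared distance to the nearest prototype (r = 2 case).
    return ((sample[:, None] - protos[None]) ** 2).sum(-1).min(axis=1).mean()

X = np.random.default_rng(1).uniform(size=(8000, 2))
ns = [4, 8, 16, 32, 64]
errs = [empirical_error(X, lloyd(X, n)) for n in ns]
# The squared error scales like n**(-2/D) on D-dimensional support, so the
# log-log slope estimates -2/D; for this sample D should come out near 2.
slope = np.polyfit(np.log(ns), np.log(errs), 1)[0]
print("estimated quantization dimension:", -2 / slope)
```

Consistency, in this picture, is the statement that such sample-based errors and slopes converge to their population counterparts as the sample grows, even when the number of prototypes grows with it.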
44

Two Variants of Self-Organizing Map and Their Applications in Image Quantization and Compression

Wang, Chao-huang 22 July 2009
The self-organizing map (SOM) is an unsupervised learning algorithm which has been successfully applied to various applications. One of the advantages of SOM is that it maintains an incremental property for handling data on the fly. Over the last several decades, variants of SOM have been used in many application domains. In this dissertation, two new SOM algorithms are developed for image quantization and compression.

The first algorithm is a sample-size adaptive SOM that can be used for color quantization of images, adapting to variations in network parameters and training sample size. The sweep size of the neighborhood function is modulated by the size of the training data. In addition, a minimax distortion principle, also modulated by the training sample size, is used to search for the winning neuron. Based on the sample-size adaptive self-organizing map, we use the sampling ratio of the training data, rather than the conventional weight change between adjacent sweeps, as the stopping criterion. As a result, the learning process is significantly sped up. Experimental results show that the proposed sample-size adaptive SOM achieves much better PSNR quality, and smaller PSNR variation, under various combinations of network parameters and image sizes.

The second algorithm is a novel classified SOM method for edge-preserving quantization of images using an adaptive subcodebook and a weighted learning rate. The subcodebook sizes of the two classes are automatically adjusted during training based on modified partial distortions that can be estimated incrementally. The proposed weighted learning rate updates the neurons efficiently regardless of how large the weighting factor is. Experimental results show that the proposed classified SOM method achieves better quality of reconstructed edge blocks and a more spread-out codebook, and incurs significantly less computational cost than the competing methods.
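To make the terms above concrete ("sweep", shrinking neighborhood, decaying learning rate), here is a generic 1-D SOM color quantizer in numpy. This is the conventional baseline, not the dissertation's sample-size-adaptive or classified variants, and all constants are illustrative.

```python
import numpy as np

def som_color_quantize(pixels, n_codes=16, sweeps=5, seed=0):
    """1-D SOM over RGB pixels: the winning code vector and its lattice
    neighbors move toward each training pixel, with a learning rate and
    neighborhood radius that decay over the course of training."""
    rng = np.random.default_rng(seed)
    codebook = rng.uniform(0, 255, size=(n_codes, 3))
    total, t = sweeps * len(pixels), 0
    for _ in range(sweeps):
        for p in pixels[rng.permutation(len(pixels))]:
            frac = t / total
            lr = 0.5 * (1 - frac)                         # decaying rate
            radius = max(1.0, n_codes / 2 * (1 - frac))   # shrinking sweep size
            win = ((codebook - p) ** 2).sum(-1).argmin()  # winning neuron
            dist = np.abs(np.arange(n_codes) - win)
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))  # neighborhood weight
            codebook += lr * h[:, None] * (p - codebook)
            t += 1
    return codebook

pixels = np.random.default_rng(1).integers(0, 256, size=(4000, 3)).astype(float)
cb = som_color_quantize(pixels)
indexed = ((pixels[:, None] - cb[None]) ** 2).sum(-1).argmin(axis=1)  # palette map
```

The dissertation's first algorithm replaces the fixed schedule above with one modulated by the training-sample size, and stops on a sampling-ratio criterion rather than the weight change between sweeps.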
45

DCT-based Image/Video Compression: New Design Perspectives

Sun, Chang January 2014
To push the envelope of DCT-based lossy image/video compression, this thesis revisits the design of some fundamental blocks in image/video coding, ranging from source modelling, quantization tables and quantizers to entropy coding. Firstly, to better handle the heavy-tail phenomenon commonly seen in DCT coefficients, a new model dubbed the transparent composite model (TCM) is developed and justified. Given a sequence of DCT coefficients, the TCM first separates the tail from the main body of the sequence, and then uses a uniform distribution to model DCT coefficients in the heavy tail, while using a parametric distribution to model DCT coefficients in the main body. The separation boundary and other distribution parameters are estimated online via maximum likelihood (ML) estimation. Efficient online algorithms are proposed for parameter estimation and their convergence is also proved. When the parametric distribution is a truncated Laplacian, the resulting TCM, dubbed the Laplacian TCM (LPTCM), not only achieves superior modeling accuracy with low estimation complexity, but also has a good capability for nonlinear data reduction by identifying and separating a DCT coefficient in the heavy tail (referred to as an outlier) from a DCT coefficient in the main body (referred to as an inlier). This in turn opens up opportunities for its use in DCT-based image compression. Secondly, quantization table design is revisited for image/video coding where soft-decision quantization (SDQ) is considered. Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design a quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate-distortion function of each source. We then show that a quantization table can be optimized so that the resulting distortion complies with certain behavior, yielding the so-called optimal distortion profile scheme (OptD). Guided by this new theoretical result, we present an efficient statistical-model-based algorithm using the Laplacian model to design quantization tables for DCT-based image compression. When applied to standard JPEG encoding, it provides more than 1.5 dB of performance gain (in PSNR), with almost no extra burden on complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5 dB gain with computational complexity reduced by a factor of more than 2000 when SDQ is off, and a 0.1 dB or greater performance gain with 85% of the complexity removed when SDQ is on. Thirdly, based on the LPTCM and OptD, we further propose an efficient non-predictive DCT-based image compression system, in which the quantizers and entropy coding are completely redesigned and the corresponding SDQ algorithm is developed. In terms of rate versus visual quality, the proposed system achieves overall coding results that are among the best and similar to those of H.264 or HEVC intra (predictive) coding. In terms of rate versus objective quality, it significantly outperforms baseline JPEG by more than 4.3 dB on average with a moderate increase in complexity, and outperforms ECEB, the state-of-the-art non-predictive image codec, by 0.75 dB when SDQ is off (at the same level of computational complexity) and by 1 dB when SDQ is on (at the cost of extra complexity).
In comparison with H.264 intra coding, our system provides an overall gain of about 0.4 dB, with dramatically reduced computational complexity. It offers comparable or even better coding performance than HEVC intra coding in the high-rate region or for complicated images, but with less than 5% of the encoding complexity of the latter. In addition, our proposed DCT-based image compression system also offers a multiresolution capability, which, together with its comparatively high coding efficiency and low complexity, makes it a good alternative for real-time image processing applications.
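To make the quantization-table discussion concrete, here is a baseline JPEG-style hard-decision quantizer for one 8×8 block in numpy/scipy, using the standard JPEG luminance table; in the scheme above, the OptD-designed table and the SDQ encoder would replace the table and the rounding rule, respectively.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Standard JPEG luminance quantization table (JPEG spec, Annex K).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coef):
    return idct(idct(coef, axis=0, norm='ortho'), axis=1, norm='ortho')

def quantize_block(block, table):
    """Hard-decision quantization: round each DCT coefficient to the
    nearest multiple of its table entry. SDQ would instead choose the
    indices jointly, trading distortion against entropy-coded rate."""
    return np.round(dct2(block - 128.0) / table)

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
indices = quantize_block(block, Q)
recon = idct2(indices * Q) + 128.0   # decoder side: dequantize + inverse DCT
```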
46

Construção geométrica de "star-product" integral em espaços simpléticos simétricos não compactos / Geometric construction of an integral star product on non-compact symplectic symmetric spaces

John Beiro Moreno Barrios 13 March 2013
Geometric quantization is a method developed to provide a geometric construction relating classical to quantum mechanics. The first step consists of realizing the symplectic form, ω, on a symplectic manifold, M, as the curvature form of a line bundle, L, over M. The functions on M then operate as sections of L. However, the space of all sections of L is too large. One wants to consider sections which are constant in certain directions (polarized sections), and for that one needs to introduce the concept of a polarization. To get a Hilbert space structure on the polarized sections, one needs to consider objects known as half-densities. In this work, we first consider a sesquilinear pairing between objects associated to certain different polarizations, which are non-transverse real polarizations, to obtain integral maps between their associated Hilbert spaces, and use the convolution of the pair groupoid M × M̄ to obtain an integral product of functions on M. In the Euclidean plane case, we recover the integral Weyl product and, in the Bieliavsky plane case, we obtain the Bieliavsky product. On the other hand, for the hyperbolic plane, such real polarizations are neither transverse nor non-transverse, so we use the pairing between a real polarization and a holomorphic polarization, which are transverse polarizations on the pair groupoid, to obtain an integral product of functions on the hyperbolic plane. In the Euclidean plane case, this same procedure also produces the integral Weyl product.
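The integral Weyl product recovered in the Euclidean case is, in its standard form on $\mathbb{R}^{2n}$ (a textbook formula, quoted here for orientation rather than from the thesis):

$$
(f \star g)(x) \;=\; \frac{1}{(\pi\hbar)^{2n}} \int_{\mathbb{R}^{2n}} \int_{\mathbb{R}^{2n}} f(y)\, g(z)\, e^{\frac{2i}{\hbar}\,\omega(x - y,\, x - z)}\, dy\, dz,
$$

where $\omega$ is the standard symplectic form; the thesis's construction produces integral kernels of this kind from pairings of polarized sections over the pair groupoid.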
47

Geometric Quantization

Hedlund, William January 2017
We formulate a process of quantization of classical mechanics from a symplectic perspective. The Dirac quantization axioms are stated, and a satisfactory prequantization map is constructed using a complex line bundle. Using polarization, it is determined which prequantum states and observables can be fully quantized. The mathematical concepts of symplectic geometry, fibre bundles, and distributions are exposed to the degree to which they occur in the quantization process. Quantizations of a cotangent bundle and a sphere are described, using real and Kähler polarizations, respectively.
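In one common sign convention, the prequantization map alluded to here is the standard Kostant-Souriau construction (not a formula quoted from the thesis), sending a classical observable $f$ to an operator on sections $s$ of the complex line bundle:

$$
\hat f\, s \;=\; -\,i\hbar\, \nabla_{X_f} s \;+\; f\, s,
\qquad
\operatorname{curv}(\nabla) \;=\; -\tfrac{i}{\hbar}\, \omega,
$$

where $X_f$ is the Hamiltonian vector field of $f$. With these conventions the Dirac axiom $[\hat f, \hat g] = -i\hbar\, \widehat{\{f, g\}}$ holds (up to sign conventions), and polarization then cuts the prequantum Hilbert space down to the fully quantizable states and observables.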
48

Compact ConvNets with Ternary Weights and Binary Activations

Holesovsky, Ondrej January 2017
Compact architectures and ternary weights with binary activations are two methods suitable for making neural networks more efficient. We introduce (a) a dithering binary activation, which improves the accuracy of ternary weight networks with binary activations by randomizing the quantization error, and (b) a method of implementing ternary weight networks with binary activations using binary operations. Despite these new approaches, training a compact SqueezeNet architecture with ternary weights and full-precision activations on ImageNet degrades classification accuracy significantly more than training a less compact architecture the same way. Therefore, ternary weights in their current form cannot be called the best method for reducing network size. However, the effect of weight decay on ternary weight network training should be investigated further in order to have more certainty in this finding.
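A rough numpy sketch of the two quantizers in play, assuming the common threshold-and-scale ternarization heuristic from the ternary-weight-networks literature and reading "dithering" as additive noise before the sign threshold; neither detail is taken from the thesis itself.

```python
import numpy as np

def ternarize(w):
    """Threshold-and-scale ternarization: small weights -> 0, the rest
    -> +/- alpha. The 0.7 * mean|w| threshold follows the ternary-weight-
    networks heuristic; the thesis may use a different rule."""
    delta = 0.7 * np.abs(w).mean()
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # per-tensor scale
    return alpha * np.sign(w) * mask

def dithering_binary_activation(x, rng, scale=1.0):
    """Binary (sign) activation with dither: adding noise before the
    threshold randomizes the quantization error instead of letting it
    correlate with the input (one plausible reading of the method)."""
    return np.where(x + rng.uniform(-scale, scale, x.shape) >= 0, 1.0, -1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
print(ternarize(w))
print(dithering_binary_activation(rng.normal(size=8), rng))
```

With weights in {-alpha, 0, +alpha} and activations in {-1, +1}, dot products reduce to sign agreements and zero-masking, which is what makes the binary-operation implementation in (b) possible.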
49

Energy-efficient Neuromorphic Computing for Resource-constrained Internet of Things Devices

Liu, Shiya 03 November 2023
Due to the limited computation and storage resources of Internet of Things (IoT) devices, many emerging intelligent applications based on deep learning techniques depend heavily on cloud computing for computation and storage. However, cloud computing suffers from long latency, poor reliability, and weak privacy, creating a need for on-device computation and storage. On-device computation is also essential for many time-critical applications, which require real-time data processing and energy efficiency. Furthermore, the escalating requirements for on-device processing are driven by network bandwidth limitations and consumer expectations concerning data privacy and user experience. In the realm of computing, there is growing interest in exploring novel technologies that can facilitate ongoing advancements in performance. Of the various prospective avenues, the field of neuromorphic computing has garnered significant recognition as a crucial means to achieve fast and energy-efficient machine intelligence applications for IoT devices. Programming neuromorphic computing hardware typically involves constructing a spiking neural network (SNN) capable of being deployed onto the designated neuromorphic hardware. This dissertation presents a range of methodologies aimed at enhancing the precision and energy efficiency of SNNs. More precisely, these advancements are achieved by incorporating four essential methods. The first method is the quantization of neural networks through knowledge distillation. This work introduces a quantization technique that effectively reduces the computational and storage resource requirements of a model while minimizing the loss of accuracy. To further reduce quantization errors, the second method introduces a novel quantization-aware training algorithm specifically designed for training quantized spiking neural network (SNN) models intended for execution on the Loihi chip, a specialized neuromorphic computing chip. SNNs generally exhibit lower accuracy than deep neural networks (DNNs). The third approach introduces a DNN-SNN co-learning algorithm, which enhances the performance of SNN models by leveraging knowledge obtained from DNN models. The design of the neural architecture plays a vital role in the accuracy and energy efficiency of an SNN model. The fourth method presents a novel neural architecture search algorithm specifically tailored for SNNs on the Loihi chip; it selects an optimal architecture based on gradients induced by the architecture at initialization across different data samples, without the need to train the architecture. To demonstrate their effectiveness and performance across diverse machine intelligence applications, our methods are evaluated on (i) image classification, (ii) spectrum sensing, and (iii) modulation symbol detection. / Doctor of Philosophy / In the emerging Internet of Things (IoT), our everyday devices, from smart home gadgets to wearables, can autonomously make intelligent decisions. However, due to their limited computing power and storage, many IoT devices depend heavily on cloud computing, which brings along issues like slow response times, privacy concerns, and unreliable connections. Neuromorphic computing is a recognized and crucial approach for achieving fast and energy-efficient machine intelligence applications in IoT devices.
Inspired by the human brain's neural networks, this cutting-edge approach allows devices to perform complex tasks efficiently and in real-time. The programming of this neuromorphic hardware involves creating spiking neural networks (SNNs). This dissertation presents several innovative methods to improve the precision and energy efficiency of these SNNs. Firstly, a technique called "quantization" reduces the computational and storage requirements of models without sacrificing accuracy. Secondly, a unique training algorithm is designed to enhance the performance of SNN models. Thirdly, a clever co-learning algorithm allows SNN models to learn from traditional deep neural networks (DNNs), further improving their accuracy. Lastly, a novel neural architecture search algorithm finds the best architecture for SNNs on the designated neuromorphic chip, without the need for extensive training. By making IoT devices smarter and more efficient, neuromorphic computing brings us closer to a world where our gadgets can perform intelligent tasks independently, enhancing convenience and privacy for users across the globe.
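A generic illustration of quantization-aware training with a straight-through estimator, the standard mechanism behind methods like the second one above, shown on a toy least-squares problem; this is not the dissertation's Loihi-specific algorithm, and every constant here is illustrative.

```python
import numpy as np

def fake_quant(w, bits=4):
    """Uniform symmetric fake quantization: the forward pass snaps weights
    to a 2**bits-level grid, while the backward pass (straight-through
    estimator) pretends the operation is the identity."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax if np.abs(w).max() > 0 else 1.0
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

# One illustrative QAT loop on a linear least-squares problem.
rng = np.random.default_rng(0)
w = rng.normal(size=(16,))
x, y = rng.normal(size=(100, 16)), rng.normal(size=(100,))
for _ in range(100):
    wq = fake_quant(w)                  # forward pass uses quantized weights
    err = x @ wq - y
    grad = x.T @ err / len(y)           # STE: gradient w.r.t. wq applied to w
    w -= 0.1 * grad                     # full-precision "shadow" weights
print(fake_quant(w, bits=4))
```

The dissertation's contribution lies in adapting this kind of training to spiking models and the Loihi chip's constraints, and in combining it with distillation and DNN-SNN co-learning; the sketch only shows the shared underlying mechanism.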
50

Topology and mass generation mechanisms in abelian gauge field theories

Bertrand, Bruno 09 September 2008
Among a number of fundamental issues, the origin of inertial mass remains one of the major open problems in particle physics. Furthermore, topological effects related to non-perturbative field configurations are poorly understood in those gauge theories of direct relevance to our physical universe. Motivated by such issues, this thesis provides a deeper understanding of the appearance of topological effects in abelian gauge field theories, also in relation to the existence of a mass gap for the gauge interactions. These effects are not accounted for when proceeding through gauge fixings, as is customary in the literature. The original Topological-Physical factorisation put forth in this work makes it possible to properly identify, in topologically massive gauge theories (TMGT), a topological sector which appears under formal limits within the Lagrangian formulation. Our factorisation then allows for a straightforward quantisation of TMGT, accounting for all the topological features inherent to such dynamics. Moreover, dual actions are constructed while preserving the gauge symmetry, also in the presence of dielectric couplings. All the celebrated mass generation mechanisms preserving the gauge symmetry are then recovered, but now find their rightful place through a network of dualities, modulo the presence of topological terms generating topological effects. In particular, a dual formulation of the famous Nielsen-Olesen vortices is constructed from TMGT. Within a novel, physically equivalent picture, these topological defects are interpreted as dielectric monopoles.
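The simplest instance of the topological mass generation at issue is the 2+1-dimensional Maxwell-Chern-Simons theory (a textbook example quoted for orientation; the thesis works with general TMGT, i.e. theories with topological BF-type couplings in arbitrary dimension):

$$
\mathcal{L} \;=\; -\tfrac{1}{4}\, F_{\mu\nu} F^{\mu\nu} \;+\; \tfrac{\kappa}{2}\, \epsilon^{\mu\nu\rho} A_\mu \partial_\nu A_\rho,
$$

where the Chern-Simons term gives the photon a gauge-invariant mass $|\kappa|$ without any Higgs field; in 3+1 dimensions the analogous role is played by the topological coupling $\tfrac{m}{2}\,\epsilon^{\mu\nu\rho\sigma} B_{\mu\nu} \partial_\rho A_\sigma$ between the gauge field and a two-form field $B$.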
