31

Optimal Points for a Probability Distribution on a Nonhomogeneous Cantor Set

Roychowdhury, Lakshmi 1975- 02 October 2013 (has links)
The objective of my thesis is to find optimal points and the quantization error for a probability measure defined on a Cantor set. The Cantor set considered in this work is generated by two self-similar contraction mappings on the real line with distinct similarity ratios. On this Cantor set we define a nonhomogeneous probability measure whose support lies on the set. For such a probability measure we first determine the n-optimal points and the nth quantization error for n = 2 and n = 3. Then, through further lemmas and propositions, we prove a theorem that gives all the n-optimal points and the nth quantization error for every positive integer n. In addition, we establish some properties of the optimal points and the quantization error for this probability measure, and we conclude with a list of n-optimal points and errors for selected positive integers n. The result in this thesis is a nonhomogeneous extension of a similar result of Graf and Luschgy from 1997. The techniques in my thesis could be extended to discretise any continuous random variable by another random variable with finite range.
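A numerical sketch (not the thesis's analytic method) of what n-optimal points mean: for a finite approximation of such a nonhomogeneous Cantor measure, a Lloyd-style iteration finds points minimizing the expected squared distance to the support. The contraction ratios and branch masses below are illustrative assumptions, not the thesis's values.

```python
import numpy as np

def lloyd_optimal_points(atoms, weights, n, iters=200, seed=0):
    """Lloyd-style iteration for approximate n-optimal points of a discrete
    measure sum_i weights[i] * delta(atoms[i]) on the real line (r = 2)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(atoms, size=n, replace=False)
    for _ in range(iters):
        # assign each atom to its nearest center, then move each center to
        # the centroid of its cell under the measure
        idx = np.abs(atoms[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(n):
            mask = idx == k
            if weights[mask].sum() > 0:
                centers[k] = np.average(atoms[mask], weights=weights[mask])
    err = (weights * np.min((atoms[:, None] - centers[None, :]) ** 2, axis=1)).sum()
    return np.sort(centers), err

# level-4 approximation of a Cantor set generated by the two contractions
# S1(x) = x/4 and S2(x) = x/2 + 1/2 (illustrative ratios), carrying a
# nonhomogeneous measure that gives the two branches masses 1/4 and 3/4
atoms, weights = np.array([0.0]), np.array([1.0])
for _ in range(4):
    atoms = np.concatenate([0.25 * atoms, 0.5 + 0.5 * atoms])
    weights = np.concatenate([0.25 * weights, 0.75 * weights])

centers, err = lloyd_optimal_points(atoms, weights, n=3)
```

Lloyd iteration only guarantees a local optimum; the thesis's contribution is precisely the exact characterization of the optimal sets for all n.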
32

Sigma-Delta Quantization with the hexagon norm in C

Zichy, Michael Andrew. January 2006 (has links) (PDF)
Thesis (M.A.)--University of North Carolina at Wilmington, 2006. / Includes bibliographical references (leaf: [41])
33

Measurement Quantization in Compressive Imaging

Lin, Yuzhang, Lin, Yuzhang January 2016 (has links)
In compressive imaging, measurement quantization and its impact on overall system performance is an important problem. This work considers several challenges that derive from quantization of compressive measurements. We investigate the design of scalar quantizers (SQ), vector quantizers (VQ), and tree-structured vector quantizers (TSVQ) for information-optimal compressive imaging. The performance of these quantizer designs is quantified for a variety of compression rates and measurement signal-to-noise ratios (SNR) using simulation studies. Our simulation results show that in the low-SNR regime a low bit-depth (3 bits per measurement) SQ is sufficient to minimize the degradation due to measurement quantization. However, in the mid-to-high-SNR regime, quantizer design requires a higher bit depth to preserve the information in the measurements. Simulation results also confirm the superior performance of VQ over SQ. As expected, TSVQ provides a good tradeoff between complexity and performance, bounded by the VQ and SQ designs at either end of the performance/complexity range. In compressive imaging, the size of the final measurement data (in bits) is also an important system design metric. In this work, we also optimize the compressive imaging system using this metric and investigate how to optimally allocate the number of measurements and the bits per measurement, i.e. the rate allocation problem. This problem is solved using both an empirical data-driven approach and a model-based approach. As a function of compression rate (bits per pixel), our simulation results show that compressive imaging can outperform traditional (non-compressive) imaging followed by image compression (JPEG 2000) in the low-to-mid-SNR regime. However, in the high-SNR regime, traditional imaging (with image compression) offers higher image fidelity than compressive imaging for a given data rate.
Compressive imaging using blockwise measurements is partly limited by its inability to perform global rate allocation. We also develop an optimal minimum mean-square error (MMSE) reconstruction algorithm for quantized compressed measurements. The algorithm employs a Markov chain Monte Carlo (MCMC) sampling technique to estimate the posterior mean. Simulation results show significant improvement over approximate MMSE algorithms.
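As a rough illustration of the bit-depth trade-off described above, a minimal uniform scalar quantizer (a sketch, not the information-optimal SQ design of the thesis) shows distortion falling as bit depth grows; Gaussian samples stand in for compressive measurements:

```python
import numpy as np

def uniform_sq(x, bits, lo, hi):
    """Uniform midpoint scalar quantizer with 2**bits levels on [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step  # reconstruct at cell midpoints

# stand-in measurements: unit-variance Gaussian samples (illustrative only)
rng = np.random.default_rng(1)
y = rng.standard_normal(10_000)
mse = {b: float(np.mean((y - uniform_sq(y, b, -4.0, 4.0)) ** 2)) for b in (1, 3, 8)}
```

At low SNR the distortion floor is set by measurement noise rather than the quantizer, which is why a 3-bit SQ can suffice there while higher-SNR regimes reward finer quantization.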
34

An Ordinary Differential Equation Based Model For Clustering And Vector Quantization

Cheng, Jie 01 January 2009 (has links) (PDF)
This research focuses on the development of a novel adaptive dynamical-system approach to vector quantization or clustering based only on ordinary differential equations (ODEs), with potential for real-time implementation. The ODE-based approach makes real-time implementation of the system with either electronic or photonic analog devices possible. The dynamical system consists of a set of energy functions that create valleys representing clusters; each valley represents a cluster of similar input patterns. The proposed system includes a dynamic parameter, called the vigilance parameter, which approximately reflects the radius of the generated valleys. Through several examples of different pattern clusters, it is shown that the model can successfully quantize/cluster these types of input patterns, and a hardware implementation with photonic and/or electronic analog devices is given. In addition, we analyze the stability of the dynamical system: by finding the equilibrium points for certain input patterns and analyzing their stability, we show the quantizing behavior of the system with respect to its parameters. We also extend the model to include a competition mechanism and vigilance dynamics. The competition mechanism causes only one label to be assigned to a group of patterns. The vigilance dynamics adjust the vigilance parameter so that the cluster size, or quantizing resolution, adapts to the density and distribution of the input patterns. This reduces the burden of re-tuning the vigilance parameter for a given input pattern set and better represents the input pattern space; making the parameter dynamic allows a bigger cluster to have a bigger radius and, as a result, a better clustering. Furthermore, an alternative dynamical system to the proposed one is introduced.
This system utilizes sigmoid and competitive functions. Although the results of this system are encouraging, the use of the sigmoid function makes stability analysis of the system extremely difficult.
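A minimal sketch of the general idea, assuming Gaussian-shaped valleys (the thesis's actual energy functions may differ): each prototype follows an ODE whose flow settles into a valley of the pattern density, with `sigma` playing the role of the vigilance parameter.

```python
import numpy as np

def ode_cluster(patterns, prototypes, sigma=0.5, dt=0.1, steps=500):
    """Euler-integrate dw/dt = mean_i (x_i - w) * exp(-|x_i - w|^2 / (2 sigma^2)).
    Each prototype settles into a valley (cluster) of the pattern density;
    sigma acts like the vigilance parameter (approximate valley radius)."""
    w = prototypes.astype(float).copy()
    for _ in range(steps):
        d = patterns[None, :, :] - w[:, None, :]           # (proto, pattern, dim)
        g = np.exp(-(d ** 2).sum(-1) / (2 * sigma ** 2))   # Gaussian valley weights
        # mean (rather than sum) keeps the step size independent of pattern count
        w += dt * (g[:, :, None] * d).mean(axis=1)
    return w

# two well-separated 2-D clusters (illustrative data, not from the thesis)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(3.0, 0.1, (50, 2))])
W = ode_cluster(X, np.array([[0.5, 0.5], [2.5, 2.5]]))
```

Because the right-hand side uses only sums, products, and exponentials of the state, this kind of flow is a plausible candidate for analog (electronic or photonic) realization, which is the motivation stated in the abstract.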
35

Vector Quantization of Deep Convolutional Neural Networks with Learned Codebook

Yang, Siyuan 16 February 2022 (has links)
Deep neural networks (DNNs), particularly convolutional neural networks (CNNs), have been widely applied in many fields, such as computer vision, natural language processing, and speech recognition. Although DNNs achieve dramatic accuracy improvements in these real-world tasks, they require significant amounts of resources (e.g., memory, energy, storage, bandwidth, and computation). This limits the application of these networks on resource-constrained systems, such as mobile and edge devices. A large body of literature addresses this problem from the perspective of compressing DNNs while preserving their performance. In this thesis, we focus on compressing deep CNNs based on vector quantization techniques. The first part of this thesis summarizes some basic concepts in machine learning and popular techniques for model compression, including pruning, quantization, low-rank factorization, and knowledge distillation. Our main interest is quantization techniques, which compress networks by reducing the precision of parameters. Full-precision weights, activations, and even gradients in networks can be quantized to 16-bit floating-point numbers, 8-bit integers, or even binary numbers. Despite a possible performance degradation, quantization can greatly reduce the model size while maintaining model accuracy. In the second part of this thesis, we propose a novel vector quantization approach, which we refer to as Vector Quantization with Learned Codebook, or VQLC, for CNNs. Rather than performing scalar quantization, we choose vector quantization, which can quantize multiple weights at once. Instead of taking a pretraining/clustering approach as in most works, in VQLC the codebook for quantization is learned together with the neural network training from scratch. In the forward pass, the traditional convolutional filters are replaced by convex combinations of a set of learnable codewords.
During inference, the compressed model is represented by a small codebook and a set of indices, resulting in a significant reduction of model size while preserving the network's performance. Lastly, we validate our approach by quantizing multiple modern CNNs on several popular image classification benchmarks and comparing with state-of-the-art quantization techniques. Our experimental results show that VQLC demonstrates at least comparable, and often superior, performance to existing schemes. In particular, VQLC shows significant advantages over existing approaches on wide networks at high compression rates.
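The forward-pass representation can be sketched as follows. The shapes and the softmax parameterization of the convex weights are assumptions for illustration, not necessarily VQLC's exact construction:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# hypothetical shapes: 64 filters of size 3*3*16, codebook of K = 8 codewords
rng = np.random.default_rng(0)
K, d, n_filters = 8, 3 * 3 * 16, 64
codebook = rng.standard_normal((K, d))        # learnable codewords
logits = rng.standard_normal((n_filters, K))  # learnable mixing logits per filter

alpha = softmax(logits)       # convex weights: nonnegative rows summing to 1
filters = alpha @ codebook    # each filter is a convex combination of codewords

# what must be stored: codebook (K*d) plus per-filter coefficients (n_filters*K),
# versus the full filter bank (n_filters*d)
compressed = K * d + n_filters * K
full = n_filters * d
```

In training, both `codebook` and `logits` would receive gradients, which matches the abstract's point that the codebook is learned from scratch rather than fit by post-hoc clustering.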
36

Receiver Implementations for a CDMA Cellular System

Aliftiras, George 01 July 1996 (has links)
The communications industry is experiencing an explosion in the demand for personal communications services (PCS). Several digital technologies have been proposed to replace overburdened analog systems. One system that has gained increasing popularity in North America is the 1.25 MHz Code Division Multiple Access (CDMA) system (IS-95). In CDMA systems, multiple-access interference limits the capacity of any system using conventional single-user correlation or matched-filter receivers. Previous research has shown that multiuser detection receivers employing interference cancellation techniques can significantly improve the capacity of a CDMA system. This thesis studies two such structures: the successive interference cancellation scheme and the parallel interference cancellation scheme. These multiuser receivers are integrated into an IS-95-compatible receiver model that is simulated in software. The thesis develops software that simulates IS-95 with conventional and multiuser receivers in multipath channels and under near-far conditions. Simulation results demonstrate the robustness of multiuser receivers to near-far conditions in a practical system. In addition to the multiuser implementations, quantization effects from finite-bit analog-to-digital converters (ADCs) in CDMA systems are also simulated. / Master of Science
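The successive interference cancellation idea can be sketched for a toy synchronous two-user case (illustrative spreading codes and amplitudes, not the IS-95 simulation of the thesis): the strong user is detected first, its regenerated signal is subtracted, and only then is the weak user detected.

```python
import numpy as np

def sic_detect(r, codes, order):
    """Successive interference cancellation for synchronous CDMA (a sketch):
    detect users strongest-first, then regenerate and subtract each detected
    contribution from the residual. A real receiver would estimate the
    detection order from received powers; here it is given."""
    residual = r.astype(float)
    bits = {}
    for u in order:
        c = codes[u]
        stat = residual @ c / len(c)   # ~ A_u * b_u plus residual interference
        bits[u] = 1 if stat >= 0 else -1
        residual = residual - stat * c # cancel the regenerated signal
    return bits

# near-far scenario: strong user 0 (A=4, bit +1), weak user 1 (A=1, bit -1),
# with deliberately non-orthogonal length-8 spreading codes
codes = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
                  [1, -1, 1, -1, 1, 1, 1, 1]], dtype=float)
r = 4.0 * (+1) * codes[0] + 1.0 * (-1) * codes[1]

conventional = 1 if r @ codes[1] / 8 >= 0 else -1   # matched filter alone
sic = sic_detect(r, codes, order=[0, 1])
```

Here the conventional matched filter gets the weak user's bit wrong because the strong user's cross-correlation dominates, while SIC recovers it after cancellation, which is exactly the near-far robustness the simulations quantify.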
37

Exploring Accumulated Gradient-Based Quantization and Compression for Deep Neural Networks

Gaopande, Meghana Laxmidhar 29 May 2020 (has links)
The growing complexity of neural networks makes their deployment on resource-constrained embedded or mobile devices challenging. With millions of weights and biases, modern deep neural networks can be computationally intensive, with large memory, power, and computational requirements. In this thesis, we devise and explore three quantization methods (post-training, in-training, and combined quantization) that quantize 32-bit floating-point weights and biases to lower-bit-width fixed-point parameters while also achieving significant pruning, leading to model compression. We use the total accumulated absolute gradient over the training process as the indicator of a parameter's importance to the network. The most important parameters are quantized by the smallest amount. The post-training quantization method sorts and clusters the accumulated gradients of the full parameter set and subsequently assigns a bit width to each cluster. The in-training quantization method sorts and divides the accumulated gradients into two groups after each training epoch; the larger group, consisting of the lowest accumulated gradients, is quantized. The combined quantization method performs in-training quantization followed by post-training quantization. We assume storage of the quantized parameters using the compressed sparse row (CSR) format for sparse matrices. On LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), post-training quantization achieves 7.62x, 10.87x, 6.39x and 12.43x compression, in-training quantization achieves 22.08x, 21.05x, 7.95x and 12.71x compression and combined quantization achieves 57.22x, 50.19x, 13.15x and 13.53x compression, respectively. Our methods quantize at the cost of accuracy, and we present our work in the light of the accuracy-compression trade-off. / Master of Science / Neural networks are being employed in many different real-world applications.
By learning the complex relationship between the input data and ground-truth output data during the training process, neural networks can predict outputs on new input data obtained in real time. To do so, a typical deep neural network often needs millions of numerical parameters, stored in memory. In this research, we explore techniques for reducing the storage requirements for neural network parameters. We propose software methods that convert 32-bit neural network parameters to values that can be stored using fewer bits. Our methods also convert a majority of numerical parameters to zero. Using special storage methods that only require storage of non-zero parameters, we gain significant compression benefits. On typical benchmarks like LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), our methods can achieve up to 57.22x, 50.19x, 13.15x and 13.53x compression respectively. Storage benefits are achieved at the cost of classification accuracy, and we present our work in the light of the accuracy-compression trade-off.
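The in-training selection step can be sketched under stated assumptions (the fixed-point scale and the 50% fraction are illustrative choices, not the thesis's exact scheme): parameters with the smallest accumulated absolute gradients are quantized, and small values round to exact zero, which is what makes CSR storage pay off.

```python
import numpy as np

def quantize_low_gradient(weights, acc_grad, frac=0.5, bits=4):
    """Quantize the fraction of parameters with the smallest accumulated
    absolute gradient to 'bits'-bit fixed point (scale is an illustrative
    choice); the most important parameters keep full precision."""
    w = weights.copy()
    k = int(frac * w.size)
    idx = np.argsort(acc_grad.ravel())[:k]        # least-important parameters
    scale = np.abs(w).max() / (2 ** (bits - 1))   # fixed-point step (assumption)
    flat = w.ravel()                              # view into w
    flat[idx] = np.round(flat[idx] / scale) * scale  # small values snap to zero
    return w

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.05, (100, 100))
G = np.abs(rng.normal(0.0, 1.0, (100, 100)))  # stand-in accumulated |gradient|
Wq = quantize_low_gradient(W, G)
sparsity = float(np.mean(Wq == 0))            # zeros need not be stored in CSR
```

The exact zeros produced here are the "pruning" side effect the abstract mentions: CSR stores only nonzero entries, so sparsity translates directly into compression.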
38

Quantization Dimension for Probability Distributions

Lindsay, Larry J. 12 1900 (has links)
The term quantization refers to the process of estimating a given probability by a discrete probability supported on a finite set. The quantization dimension Dr of a probability is related to the asymptotic rate at which the expected distance (raised to the rth power) to the support of the quantized version of the probability goes to zero as the size of the support is allowed to go to infinity. This assumes that the quantized versions are in some sense "optimal" in that the expected distances have been minimized. In this dissertation we give a short history of quantization as well as some basic facts. We develop a generalized framework for the quantization dimension which extends the current theory to include a wider range of probability measures. This framework uses the theory of thermodynamic formalism and the multifractal spectrum. It is shown that at least in certain cases the quantization dimension function D(r)=Dr is a transform of the temperature function b(q), which is already known to be the Legendre transform of the multifractal spectrum f(a). Hence, these ideas are all closely related and it would be expected that progress in one area could lead to new results in another. It would also be expected that the results in this dissertation would extend to all probabilities for which a quantization dimension function exists. The cases considered here include probabilities generated by conformal iterated function systems (and include self-similar probabilities) and also probabilities generated by graph directed systems, which further generalize the idea of an iterated function system.
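The quantities described informally above can be written out in the usual Graf-Luschgy style (a sketch of the standard definitions; the dissertation's own notation may differ slightly): the nth quantization error of order r and the quantization dimension are

```latex
e_{n,r}(P) \;=\; \inf\Bigl\{ \Bigl( \int d(x,A)^{r} \, dP(x) \Bigr)^{1/r} \;:\; A \subset \mathbb{R}^{d},\ \#A \le n \Bigr\},
\qquad
D_{r} \;=\; \lim_{n \to \infty} \frac{\log n}{-\log e_{n,r}(P)} .
```

The limit need not exist in general, which is one reason the dissertation works with a generalized framework covering a wider class of measures.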
39

Nonattribution Properties of JPEG Quantization Tables

Tuladhar, Punnya 17 December 2010 (has links)
In digital forensics, source camera identification of digital images has drawn attention in recent years. An image does contain information about its camera and/or editing software somewhere in it. The interest of this research, however, is to identify the manufacturer (henceforth the make and model) of a camera using only header information from the JPEG encoding, such as the quantization and Huffman tables. Having examined around 110,000 images, we conclude that, for all practical purposes, using quantization and Huffman tables alone to predict a camera make and model is not a viable approach. We found no correlation between the quantization and Huffman tables of images and camera makes. Rather, the quantization and Huffman tables are determined by quality factors such as the resolution, RGB values, and intensity of an image, together with the standard settings of the camera.
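The conclusion that tables track quality settings rather than make is consistent with the IJG (libjpeg) quality-scaling rule, which many cameras and editors follow: the table is a deterministic function of a base table and a quality setting. A sketch using the first row of the JPEG standard's example luminance table:

```python
def ijg_scaled_table(base, quality):
    """IJG libjpeg quality scaling: scale a base quantization table by the
    quality setting (1-100), clamping entries to the baseline range 1-255."""
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [max(1, min(255, (b * s + 50) // 100)) for b in base]

# first row of the example luminance quantization table from the JPEG standard
base = [16, 11, 10, 16, 24, 40, 51, 61]
q50 = ijg_scaled_table(base, 50)  # scale factor 100: base table unchanged
q90 = ijg_scaled_table(base, 90)  # higher quality: smaller (finer) steps
```

Any two cameras using this rule at the same quality emit identical tables, so the table identifies the quality pipeline, not the manufacturer.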
40

Construção geométrica de "star-product" integral em espaços simpléticos simétricos não compactos / Geometric construction of an integral "star-product" on noncompact symplectic symmetric spaces

Barrios, John Beiro Moreno 13 March 2013 (has links)
Geometric quantization is a method developed to provide a geometric construction relating classical and quantum mechanics. The first step consists of realizing the symplectic form, ω, on a symplectic manifold, M, as the curvature form of a line bundle, L, over M. The functions on M then operate as sections of L. However, the space of all sections of L is too large: one wants to consider sections that are constant in certain directions (polarized sections), and for that one needs to introduce the concept of a polarization. To get a Hilbert space structure on the polarized sections, one needs to consider objects known as half-densities; there is also a sesquilinear pairing between sections of different polarizations. In this work, we first consider the pairing between objects associated to certain nontransverse real polarizations to obtain integral maps between their associated Hilbert spaces, and we use the convolution of the pair groupoid M × M̄ to obtain an integral product of functions on M. In the Euclidean plane case, we recover the integral Weyl product, and in the Bieliavsky plane case, we obtain the Bieliavsky product. For the hyperbolic plane, such real polarizations are neither transverse nor nontransverse, so we use the pairing between a real polarization and a holomorphic polarization, which are transverse polarizations on the pair groupoid, to obtain an integral product of functions on the hyperbolic plane. This same procedure, in the Euclidean plane case, also produces the integral Weyl product.
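The "first step" above is the prequantization condition; in one common convention (signs and factors of ħ vary by author, so this is a sketch rather than the thesis's exact normalization) the connection ∇ on L must satisfy

```latex
F_{\nabla} \;=\; -\,i\,\omega,
\qquad
\Bigl[ \tfrac{\omega}{2\pi} \Bigr] \in H^{2}(M;\mathbb{Z}),
```

where the integrality condition on the class of ω/2π is what guarantees that a line bundle with this curvature exists at all.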
