11

Compressed Sensing and ΣΔ-Quantization

Feng, Joe-Mei 12 February 2018 (has links)
No description available.
12

Aggregated Learning: An Information Theoretic Framework to Learning with Neural Networks

Soflaei Shahrbabak, Masoumeh 04 November 2020 (has links)
Deep learning techniques have achieved profound success in many challenging real-world applications, including image recognition, speech recognition, and machine translation. This success has increased the demand for developing deep neural networks and more effective learning approaches. The aim of this thesis is to consider the problem of learning a neural network classifier and to propose a novel approach to this problem under the Information Bottleneck (IB) principle. Based on the IB principle, we associate with the classification problem a representation learning problem, which we call ``IB learning". A careful investigation shows that there is an unconventional quantization problem closely related to IB learning. We formulate this problem and call it ``IB quantization". We show that IB learning is, in fact, equivalent to the IB quantization problem. The classical results in rate-distortion theory then suggest that IB learning can benefit from a vector quantization approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted by variational techniques, results in a novel learning framework that we call ``Aggregated Learning (AgrLearn)" for classification with neural network models. In this framework, several objects are jointly classified by a single neural network; in other words, AgrLearn can simultaneously optimize over multiple data samples, unlike standard neural networks. Two variants of the framework are introduced: ``deterministic AgrLearn (dAgrLearn)" and ``probabilistic AgrLearn (pAgrLearn)". We verify the effectiveness of this framework through extensive experiments on standard image recognition tasks, demonstrate its performance on a real-world natural language processing (NLP) task, sentiment analysis, and compare it with other available frameworks for the IB learning problem.
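The aggregation idea described in the abstract, jointly classifying several inputs with a single network pass, can be sketched as follows. This is a minimal toy illustration, not the thesis's actual architecture: the aggregation size, feature dimension, class count, and the one-layer "network" are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agg, d, n_classes = 4, 16, 10    # illustrative sizes, not from the thesis

# Aggregate n_agg input objects into one network input vector.
x = rng.normal(size=(n_agg, d))
net_input = x.reshape(-1)          # shape (n_agg * d,)

# A toy linear "network" that emits n_agg sets of class scores jointly.
W = rng.normal(size=(n_agg * n_classes, n_agg * d)) * 0.1
logits = (W @ net_input).reshape(n_agg, n_classes)

# One predicted label per aggregated object, from a single forward pass.
preds = logits.argmax(axis=1)
```

The point of the sketch is only the input/output shape convention: several samples share one forward computation, so the network can be optimized over them jointly.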
13

Study on Digital Filter Design and Coefficient Quantization

Zhang, Shu-Bin 27 July 2011 (has links)
This thesis builds on convex optimization theory [1]. We study how to transform the filter design problem into a convex optimization problem, so that the solution is guaranteed to be globally optimal. After obtaining the filter coefficients, we quantize them and then reduce the number of quantization bits using the algorithm of [2]. Finally, we vary the order in which the coefficients are quantized and compare the results with those of the method in [2].
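The effect of coefficient quantization that this thesis studies can be illustrated with a generic sketch: uniformly quantizing FIR coefficients to a given number of fractional bits and bounding the resulting frequency-response error. The coefficients below are an arbitrary illustrative low-pass filter, not the thesis's designed filter, and the rounding scheme is plain uniform quantization rather than the algorithm of [2].

```python
import numpy as np

# Illustrative low-pass FIR coefficients (not taken from the thesis).
h = np.array([0.02, 0.11, 0.22, 0.30, 0.22, 0.11, 0.02])

def quantize(coeffs, bits):
    """Uniform fixed-point quantization to `bits` fractional bits."""
    step = 2.0 ** (-bits)
    return np.round(coeffs / step) * step

for bits in (4, 8, 12):
    hq = quantize(h, bits)
    # The worst-case frequency-response deviation is bounded by the sum of
    # coefficient errors (triangle inequality applied to the DTFT).
    err_bound = np.abs(h - hq).sum()
    print(f"{bits} bits: max response error <= {err_bound:.6f}")
```

Fewer bits give cheaper hardware but a larger response-error bound, which is the trade-off the bit-reduction algorithm navigates.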
14

Multiple Description Coding : proposed methods and video application

Moradi, Saeed 29 August 2007 (has links)
Multiple description coding (MDC) has received a lot of attention recently, and has been studied widely and extended to many demanding applications such as speech and video. MDC is a coding technique that generates correlated descriptions of the source stream for transmission over a diversity system with several channels. The objective of this diversity system is to overcome channel impairments and provide more reliability. In the context of lossy source coding and quantization, a multiple description quantization system usually consists of multiple channels, side encoders to quantize the source samples and send them over different channels, and side and central decoders to reconstruct the source. We propose two multiple description quantization schemes to design the codebooks and partitions of the side and central quantizers of a multiple description system with two channels. Our framework builds on the approach of multiple description quantization via Gram-Schmidt orthogonalization. The basic idea of our proposed schemes is to minimize a Lagrangian cost function by an iterative technique which jointly designs the side codebooks and partitions. Our proposed methods perform very closely to the optimum MD quantizer with considerably less complexity. We also propose a multiple description video coding technique motivated by human visual perception. We employ two simple parameters as a measure of the perceptual tolerance of discrete cosine transform (DCT) blocks against visual distortion. We duplicate the essential information, such as motion vectors and some low-frequency DCT coefficients of the prediction errors, into each description, and split the remaining high-frequency DCT coefficients according to the calculated perceptual tolerance parameter. Our proposed technique has very low complexity and achieves superior performance compared to other similar techniques which do not consider perceptual distortion in the design problem.
/ Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2007
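The side-decoder/central-decoder structure of a two-channel MDC system can be sketched with the classical odd/even sample-splitting baseline. This is a generic textbook illustration, not the thesis's Gram-Schmidt-based quantizer design: each description carries half the samples, a side decoder interpolates the missing half, and the central decoder combines both descriptions.

```python
import numpy as np

x = np.sin(np.linspace(0, 4 * np.pi, 64))   # toy source sequence

# Two correlated descriptions: even-index and odd-index samples.
d0, d1 = x[0::2], x[1::2]

# Side decoder: only description 0 arrives; estimate each odd sample
# as the average of its two even-index neighbours.
recon = np.empty_like(x)
recon[0::2] = d0
recon[1::2] = 0.5 * (d0 + np.roll(d0, -1))

side_mse = np.mean((x - recon) ** 2)        # small but nonzero distortion

# Central decoder: both descriptions arrive; reconstruction is exact here.
central = np.empty_like(x)
central[0::2], central[1::2] = d0, d1
```

Losing one channel therefore degrades quality gracefully instead of catastrophically, which is the reliability property the abstract's diversity system aims for.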
15

Fixed-Point Implementation of a Multistage Receiver

Cameron, Rick A. 13 January 1997 (has links)
This dissertation provides a study of synchronization and quantization issues in implementing a multistage receiver in fixed-point Digital Signal Processing (DSP) hardware. Current multistage receiver analysis has neglected the effects of synchronization and quantization; however, these effects can degrade system performance and therefore decrease overall system capacity. The first objective is to analyze and simulate various effects of synchronization in a multistage system. These include the effect of unsynchronized users on the bit error rate (BER) of synchronized users, and determining whether interference cancellation can be used to improve the synchronization time. This information is used to determine whether synchronization will limit overall system capacity. Both analytical and simulation techniques are presented. The second objective is to study the effects of quantization on the performance of the multistage receiver. A DSP implementation of a practical receiver will require a DSP chip with fewer bits than the computer hardware typically used to simulate receiver performance; the DSP implementation therefore performs worse than the simulation results predict. In addition, a fixed-point implementation is often favored over a floating-point implementation, due to the high processing requirements imposed by the high chip rate. This further degrades performance because of the limited dynamic range available with fixed-point arithmetic. The performance of the receiver using a fixed-point implementation is analyzed and simulated. We also relate these topics to other important issues in the hardware implementation of multistage receivers, including the effects of frequency offsets at the receiver and the development of a multiuser air protocol interface (API). This dissertation represents a contribution to the ongoing hardware development effort in multistage receivers at Virginia Tech. / Ph. D.
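The fixed-point degradation described above can be illustrated with a generic sketch: a toy correlator computed in floating point versus 16-bit fixed point. The spreading sequence, word lengths, and noise level are illustrative assumptions, not the dissertation's receiver parameters.

```python
import numpy as np

def to_fixed(x, frac_bits, word_bits=16):
    """Quantize to two's-complement fixed point with saturation."""
    scale = 2 ** frac_bits
    q = np.round(x * scale)
    lo, hi = -(2 ** (word_bits - 1)), 2 ** (word_bits - 1) - 1
    return np.clip(q, lo, hi) / scale

rng = np.random.default_rng(1)
chips = rng.choice([-1.0, 1.0], size=128)    # toy spreading sequence
rx = chips + 0.1 * rng.normal(size=128)      # received signal, light noise

# Correlator (matched-filter) output in floating point vs. fixed point.
soft_float = np.dot(rx, chips) / len(chips)
soft_fixed = np.dot(to_fixed(rx, frac_bits=8), chips) / len(chips)

print(soft_float, soft_fixed)   # close, but the fixed-point value is coarser
```

Shrinking `frac_bits` or `word_bits` widens the gap between the two outputs, which is the simulation-versus-hardware mismatch the dissertation quantifies.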
16

First and Second Quantization Theories of Parastatistics

Vo-Dai, Thien 07 1900 (has links)
Although usually only two kinds of statistics, namely Bose-Einstein and Fermi-Dirac statistics, are considered in Quantum Mechanics and in Quantum Field Theory, other kinds of statistics, collectively called parastatistics, are conceivable. We critically review theoretical studies of parastatistics to date, pointing out and clarifying several confusions.

We first study the "proofs" so far proposed for the symmetrization postulate, which excludes parastatistics, emphasizing their ad hoc nature. Then, after exploring in detail the structure of the quantum mechanical theory of paraparticles, we clarify some confusions concerning the compatibility of parastatistics with the so-called cluster property, which has been an issue of controversy for several years. We show, following a suggestion of Greenberg, that the quantum mechanical theory of paraparticles can be formulated in terms of density matrices compatibly with the cluster property. We also discuss such topics as selection rules for systems with variable numbers of paraparticles, the connection between statistics and permutation characters, and the classification of paraparticles.

For the quantum field theory of paraparticles, we study discrete representations of the para-commutation relations and illustrate in detail Greenberg and Messiah's theorem concerning Green's ansatzes. Fundamental topics such as the spin-statistics theorem, the TCP theorem, and the observability of parafields are also discussed on the basis of Green's ansatzes. Finally, we point out that the so-called particle permutation operators do not always define multi-dimensional representations of the permutation group in both first and second quantization theories. This calls into question the validity of the correspondence between the two theories which has recently been proposed. / Thesis / Doctor of Philosophy (PhD)
17

Quantization of Real-Valued Attributes for Data Mining

Qaiser, Elizae 11 October 2001 (has links)
No description available.
18

Sequential Scalar Quantization of Two Dimensional Vectors in Polar and Cartesian Coordinates

WU, HUIHUI 08 1900 (has links)
This thesis addresses the design of quantizers for two-dimensional vectors, where the scalar components are quantized sequentially. Specifically, design algorithms for unrestricted polar quantizers (UPQ) and successively refinable UPQs (SRUPQ) for vectors in polar coordinates are proposed. Additionally, algorithms for the design of sequential scalar quantizers (SSQ) for vectors with correlated components in Cartesian coordinates are devised. Both the entropy-constrained (EC) and fixed-rate (FR) cases are investigated. The proposed UPQ and SRUPQ design algorithms are developed for continuous bivariate sources with circularly symmetric densities. They are globally optimal for the class of UPQs/SRUPQs with magnitude thresholds confined to a finite set. The time complexity for the UPQ design is $O(K^2 + KP_{max})$ in the EC case, respectively $O(KN^2)$ in the FR case, where $K$ is the size of the set from which the magnitude thresholds are selected, $P_{max}$ is an upper bound for the number of phase levels corresponding to a magnitude bin, and $N$ is the total number of quantization bins. The time complexity of the SRUPQ design is $O(K^3P_{max})$ in the EC case, respectively $O(K^2N^{'2}P_{max})$ in the FR case, where $N'$ denotes the ratio between the number of bins of the fine UPQ and the coarse UPQ. The SSQ design is considered for finite-alphabet correlated sources. The proposed algorithms are globally optimal for the class of SSQs with convex cells, i.e., where each quantizer cell is the intersection of the source alphabet with an interval of the real line. The time complexity for both EC and FR cases amounts to $O(K_1^2K_2^2)$, where $K_1$ and $K_2$ are the respective sizes of the two source alphabets.
It is also proved that, by applying the proposed SSQ algorithms to finite, uniform discretizations of correlated sources with continuous joint probability density function, the performance approaches that of the optimal SSQs with convex cells for the original sources as the accuracy of the discretization increases. The proposed algorithms generally rely on solving the minimum-weight path (MWP) problem in the EC case, respectively the length-constrained MWP problem or a related problem in the FR case, in a weighted directed acyclic graph (WDAG) specific to each problem. Additional computations are needed in order to evaluate the edge weights in this WDAG. In particular, in the EC-SRUPQ case, this additional work includes solving the MWP problem between multiple node pairs in some other WDAG. In the EC-SSQ (respectively, FR-SSQ) case, the additional computations consist of solving the MWP (respectively, length-constrained MWP) problem for a series of other WDAGs. / Dissertation / Doctor of Philosophy (PhD)
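The minimum-weight path (MWP) primitive that the design algorithms above rely on can be sketched with a standard dynamic program over a topologically ordered WDAG. The graph below is a toy example with made-up weights, not one of the thesis's quantizer-design graphs; in the actual algorithms, nodes would correspond to candidate thresholds and edge weights to per-bin costs.

```python
import math

def min_weight_path(n_nodes, edges):
    """Minimum-weight path from node 0 to node n_nodes-1 in a WDAG whose
    nodes are topologically ordered (every edge goes low -> high index)."""
    cost = [math.inf] * n_nodes
    cost[0] = 0.0
    pred = [None] * n_nodes
    for u, v, w in sorted(edges):          # sorting by source index = topo order
        if cost[u] + w < cost[v]:
            cost[v] = cost[u] + w
            pred[v] = u
    path, node = [], n_nodes - 1
    while node is not None:                # walk predecessors back to node 0
        path.append(node)
        node = pred[node]
    return cost[-1], path[::-1]

# Toy WDAG: each edge (u, v, w) stands for one quantizer bin of cost w.
edges = [(0, 1, 2.0), (0, 2, 2.5), (1, 2, 1.0), (1, 3, 4.0), (2, 3, 1.5)]
best_cost, best_path = min_weight_path(4, edges)
```

Because the graph is acyclic and topologically ordered, a single relaxation sweep suffices, giving time linear in the number of edges; the quadratic factors in the complexities quoted above come from the size of the WDAG and from evaluating its edge weights.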
19

Fedosov Quantization and Perturbative Quantum Field Theory

Collini, Giovanni 11 May 2017 (has links) (PDF)
Fedosov has described a geometro-algebraic method to construct in a canonical way a deformation of the Poisson algebra associated with a finite-dimensional symplectic manifold ("phase space"). His algorithm gives a non-commutative, but associative, product (a so-called "star product") between smooth phase space functions, parameterized by Planck's constant ℏ, which is treated as a deformation parameter. In the limit as ℏ goes to zero, the star product commutator goes to ℏ times the Poisson bracket, so in this sense his method provides a quantization of the algebra of classical observables. In this work, we develop a generalization of Fedosov's method which applies to the infinite-dimensional symplectic "manifolds" that occur in Lagrangian field theories. We show that the procedure remains mathematically well-defined, and we explain the relationship of this method to more standard perturbative quantization schemes in quantum field theory.
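The small-ℏ behaviour of a star product described in the abstract can be written out explicitly; sign and factor-of-$i$ conventions vary across the literature, so the following is one common normalization rather than the thesis's exact convention:

```latex
f \star g = fg + \frac{i\hbar}{2}\,\{f,g\} + O(\hbar^2),
\qquad
f \star g - g \star f = i\hbar\,\{f,g\} + O(\hbar^2),
```

so that the star commutator reproduces the Poisson bracket $\{f,g\}$ at leading order in ℏ, which is the sense in which the deformation quantizes the classical observable algebra.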
20

Progressive Lossy-to-Lossless Compression of DNA Microarray Images

Hernandez-Cabronero, Miguel, Blanes, Ian, Pinho, Armando J., Marcellin, Michael W., Serra-Sagrista, Joan 05 1900 (has links)
The analysis techniques applied to DNA microarray images are under active development. As new techniques become available, it will be useful to apply them to existing microarray images to obtain more accurate results. The compression of these images can be a useful tool to alleviate the costs associated with their storage and transmission. The recently proposed Relative Quantizer (RQ) coder provides the most competitive lossy compression ratios while introducing only acceptable changes in the images. However, images compressed with the RQ coder can only be reconstructed with a limited quality, determined before compression. In this work, a progressive lossy-to-lossless scheme is presented to solve this problem. First, the regular structure of the RQ intervals is exploited to define a lossy-to-lossless coding algorithm called the Progressive RQ (PRQ) coder. Second, an enhanced version that prioritizes a region of interest, called the PRQ-region of interest (ROI) coder, is described. Experiments indicate that the PRQ coder offers progressivity with lossless and lossy coding performance almost identical to the best techniques in the literature, none of which is progressive. In turn, the PRQ-ROI coder exhibits very similar lossless coding results with better rate-distortion performance than both the RQ and PRQ coders.
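The lossy-to-lossless progression described above can be illustrated with generic bit-plane refinement: each transmitted plane tightens the reconstruction until it becomes exact. This is a textbook progressive scheme sketched on made-up 8-bit "pixel" values, not the PRQ coder's interval-based construction.

```python
import numpy as np

x = np.array([3, 141, 59, 26, 208], dtype=np.uint8)  # toy 8-bit pixel values

# Send the most significant bit plane first, then refine plane by plane.
recon = np.zeros_like(x)
for plane in range(7, -1, -1):
    recon |= ((x >> plane) & 1) << plane
    # After receiving plane `plane`, the error is bounded by 2**plane - 1,
    # so every additional plane halves the worst-case distortion.
    assert np.max(np.abs(x.astype(int) - recon.astype(int))) < 2 ** plane
```

A decoder can stop after any prefix of planes for a lossy image, or read all of them for a lossless one, which is the progressivity property the PRQ coder provides for RQ-compressed microarray images.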
