141 |
Quantization of symplectic transformations on manifolds with conical singularities
Nazaikinskii, Vladimir; Schulze, Bert-Wolfgang; Sternin, Boris; Shatalov, Victor. January 1997 (has links)
The structure of symplectic (canonical) transformations on manifolds with conical singularities is established. The operators associated with these transformations are defined in the weighted spaces, and their properties are investigated.
|
142 |
Non-Abelian reduction in deformation quantization
Fedosov, Boris. January 1997 (has links)
We consider a G-invariant star-product algebra A on a symplectic manifold (M,ω) obtained by a canonical construction of deformation quantization. Under the assumptions of the classical Marsden-Weinstein theorem, we define a reduction of the algebra A with respect to the G-action. The reduced algebra turns out to be isomorphic to a canonical star-product algebra on the reduced phase space B. In other words, we show that the reduction commutes with the canonical G-invariant
deformation quantization. A similar statement in the framework of geometric quantization is known as the Guillemin-Sternberg conjecture (by now completely proved).
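As a hedged sketch of the setting this abstract refers to (the notation below is assumed, not taken from the paper): classically, Marsden-Weinstein reduction passes from (M,ω) to a reduced phase space, and the quantum version replaces the geometric quotient by an algebraic one.

```latex
% Classical Marsden-Weinstein reduction: a Hamiltonian G-action on (M,\omega)
% with equivariant moment map \mu : M \to \mathfrak{g}^*, 0 a regular value:
B = \mu^{-1}(0)/G, \qquad \omega_B \ \text{induced by } \omega|_{\mu^{-1}(0)} .
% Schematic quantum analogue: given the G-invariant star-product algebra
% (A,\star) and quantized moment-map components \hat{\mu}_X,\ X \in \mathfrak{g},
A_{\mathrm{red}} = \Bigl( A \,\Big/\, {\textstyle\sum_{X}} A \star \hat{\mu}_X \Bigr)^{\!G},
\qquad A_{\mathrm{red}} \cong \bigl( C^\infty(B)[[\hbar]], \star_B \bigr).
```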
|
143 |
The index of quantized contact transformations on manifolds with conical singularities
Schulze, Bert-Wolfgang; Nazaikinskii, Vladimir; Sternin, Boris. January 1998 (has links)
The quantization of contact transformations of the cosphere bundle over a manifold with conical singularities is described. The index of Fredholm operators given by this quantization is calculated. The answer is given in terms of the Epstein-Melrose contact degree and the conormal symbol of the corresponding operator.
|
144 |
Tensorial spacetime geometries carrying predictive, interpretable and quantizable matter dynamics
Rivera Hernández, Sergio. January 2012 (has links)
Which tensor fields G on a smooth manifold M can serve as a spacetime structure? In the first part of this thesis, it is found that only a severely restricted class of tensor fields can provide classical spacetime geometries, namely those that can carry predictive, interpretable and quantizable matter dynamics. The obvious dependence of this characterization of admissible tensorial spacetime geometries on specific matter is not a weakness, but rather presents an insight: it was Maxwell theory that justified Einstein in promoting Lorentzian manifolds to the status of a spacetime geometry. Any matter that does not mimic the structure of Maxwell theory will force us to choose another geometry on which the matter dynamics of interest are predictive, interpretable and quantizable.
These three physical conditions on matter impose three corresponding algebraic conditions on the totally symmetric contravariant coefficient tensor field P that determines the principal symbol of the matter field equations in terms of the geometric tensor G: the tensor field P must be hyperbolic, time-orientable and energy-distinguishing. Remarkably, these physically necessary conditions on the geometry are mathematically already sufficient to realize all kinematical constructions familiar from Lorentzian geometry, for precisely the same structural reasons. We were able to show this by employing a subtle interplay of convex analysis, the theory of partial differential equations and real algebraic geometry.
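For orientation, the first of the three conditions has a standard algebraic formulation (Gårding hyperbolicity); the notation below is assumed for illustration, not quoted from the thesis.

```latex
% The principal polynomial P (induced by the coefficient tensor field P) is
% hyperbolic at a point with respect to a covector h if
P(h) \neq 0
\qquad\text{and}\qquad
P(q + \lambda h) = 0 \;\Longrightarrow\; \lambda \in \mathbb{R}
\quad\text{for every covector } q .
```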
In the second part of this thesis, we then explore general properties of any hyperbolic, time-orientable and energy-distinguishing tensorial geometry. Physically most important are the construction of freely falling non-rotating laboratories, the appearance of admissible modified dispersion relations to particular observers, and the identification of a mechanism that explains why massive particles that are faster than some massless particles can radiate off energy until they are slower than all massless particles in any hyperbolic, time-orientable and energy-distinguishing geometry.
In the third part of the thesis, we explore how tensorial spacetime geometries fare when one wants to quantize particles and fields on them. This study is motivated, in part, by the need to provide the tools to calculate the rate at which superluminal particles radiate off energy to become infraluminal, as explained above. Remarkably, it is again the three geometric conditions of hyperbolicity, time-orientability and energy-distinguishability that allow the quantization of general linear electrodynamics on an area metric spacetime and the quantization of massive point particles obeying any admissible dispersion relation. We explore the issue of field equations of all possible derivative orders in a rather systematic fashion, and prove a theorem, most useful in practice, that determines Dirac algebras allowing the reduction of derivative orders.
The final part of the thesis presents the sketch of a truly remarkable result that was obtained by building on the work of the present thesis. Based in particular on the subtle duality maps between momenta and velocities in general tensorial spacetimes, it could be shown that gravitational dynamics for hyperbolic, time-orientable and energy-distinguishing geometries need not be postulated; rather, the formidable physical problem of their construction can be reduced to a mere mathematical task: the solution of a system of homogeneous linear partial differential equations. This far-reaching physical result on modified gravity theories is a direct, but difficult to derive, outcome of the findings in the present thesis.
Throughout the thesis, the abstract theory is illustrated through instructive examples.
|
145 |
The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction
Clayton, Arnshea. 09 June 2006 (has links)
In this thesis the relative importance of input encoding and learning algorithm on protein secondary structure prediction is explored. A novel input encoding, based on multidimensional scaling applied to a recently published amino acid substitution matrix, is developed and shown to be superior to an arbitrary input encoding. Both decimal valued and binary input encodings are compared. Two neural network learning algorithms, Resilient Propagation and Learning Vector Quantization, which have not previously been applied to the problem of protein secondary structure prediction, are examined. Input encoding is shown to have a greater impact on prediction accuracy than learning methodology with a binary input encoding providing the highest training and test set prediction accuracy.
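Learning Vector Quantization in its simplest form (LVQ1) maintains labeled prototype vectors and nudges the nearest one toward a correctly classified training sample, or away from a misclassified one. The stdlib-only sketch below illustrates that update rule; the function names, toy data and learning rate are assumptions, not the thesis's actual configuration.

```python
def lvq1_train(samples, labels, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: pull the winning prototype toward a correctly classified
    sample; push it away when the labels disagree."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # winner = nearest prototype (squared Euclidean distance)
            w = min(range(len(prototypes)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(prototypes[i], x)))
            sign = 1.0 if proto_labels[w] == y else -1.0
            prototypes[w] = [p + sign * lr * (a - p)
                             for p, a in zip(prototypes[w], x)]
    return prototypes

def lvq1_predict(x, prototypes, proto_labels):
    """Classify x by the label of the nearest prototype."""
    w = min(range(len(prototypes)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(prototypes[i], x)))
    return proto_labels[w]
```

In a secondary-structure setting, `samples` would hold encoded residue windows and `proto_labels` the structure classes (e.g. helix/strand/coil).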
|
146 |
Transmission of vector quantization over a frequency-selective Rayleigh fading CDMA channel
Nguyen, Son Xuan. 19 December 2005
Recently, the transmission of vector quantization (VQ) over a code-division multiple access (CDMA) channel has received considerable attention in the research community. The complexity of optimal decoding for VQ in CDMA communications is prohibitive for implementation, especially for systems with a medium or large number of users. A suboptimal approach to VQ decoding over a CDMA channel disturbed by additive white Gaussian noise (AWGN) was recently developed. Such a suboptimal decoder is built from a soft-output multiuser detector (MUD), a soft bit estimator and the optimal soft VQ decoders of the individual users.

Due to its lower complexity and good performance, such a decoding scheme is an attractive alternative to the complicated optimal decoder. It is necessary to extend this decoding scheme to a frequency-selective Rayleigh fading CDMA channel, a channel model typically seen in mobile wireless communications. This is precisely the objective of this thesis.

Furthermore, the suboptimal decoders are obtained not only for binary phase shift keying (BPSK), but also for M-ary pulse amplitude modulation (M-PAM). This extension offers a flexible trade-off between the spectral efficiency and performance of the systems. In addition, two algorithms based on distance measures and reliability processing are introduced as further alternatives to the suboptimal decoder.

Simulation results indicate that the suboptimal decoders studied in this thesis also perform very well over a frequency-selective Rayleigh fading CDMA channel.
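The final stage of such a decoder chain is a soft VQ decoder: rather than hard-deciding the received index, it outputs the expectation of the codevectors weighted by the posterior probability of each index. A minimal sketch, assuming independent per-bit posteriors P(b_k = 1) from the soft detector (the function name and index-to-bit convention are illustrative, not from the thesis):

```python
def soft_vq_decode(bit_probs, codebook):
    """MMSE soft VQ decoding: weight every codevector by the posterior
    probability of its binary index, given per-bit probabilities
    bit_probs[k] = P(b_k = 1), and return the expectation."""
    nbits = len(bit_probs)
    est = [0.0] * len(codebook[0])
    for idx, cv in enumerate(codebook):
        # probability of this index under independent bit posteriors
        p = 1.0
        for k in range(nbits):
            bit = (idx >> k) & 1
            p *= bit_probs[k] if bit else (1.0 - bit_probs[k])
        est = [e + p * c for e, c in zip(est, cv)]
    return est
```

With perfectly reliable bits this reduces to ordinary table lookup; with uninformative bits it returns the codebook centroid.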
|
147 |
A Cost Shared Quantization Algorithm and its Implementation for Multi-Standard Video CODECS
2012 December 1900 (has links)
The current trend of digital convergence creates the need for video encoder and decoder systems, known as codecs for short, that support multiple video standards on a single platform. In a modern video codec, quantization is a key unit used for video compression. In this thesis, a generalized quantization algorithm and its hardware implementation are presented to compute quantized coefficients for six different video codecs, including the newly developed codec High Efficiency Video Coding (HEVC). HEVC, the successor to H.264/MPEG-4 AVC, aims to substantially improve coding efficiency compared to the AVC High Profile. The thesis presents a high-performance circuit-shared architecture that can perform the quantization operation for HEVC, H.264/AVC, AVS, VC-1, MPEG-2/4 and Motion JPEG (MJPEG). Since HEVC was still in the drafting stage, the architecture was designed in such a way that any final changes could be accommodated into the design. The proposed quantizer architecture is completely division-free, as the division operation is replaced by multiplication, shift and addition operations. The design was implemented on an FPGA and later synthesized in CMOS 0.18 μm technology. The results show that the proposed design satisfies the requirements of all the codecs, with a maximum decoding capability of 60 fps at 187.3 MHz for 1080p HD video on a Xilinx Virtex4 LX60 FPGA. The scheme is also suitable for low-cost implementation in modern multi-codec systems.
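The division-free idea is standard in video quantizers: division by the quantization step is folded into an integer multiply, an additive rounding offset and a right shift. A sketch under assumed constants (the multiplier 13107 matches a published H.264 scaling-table entry for QP%6 = 0, but the offset choice and helper names are illustrative, not the thesis's design):

```python
def quantize(coeff, mult, shift, offset):
    """Division-free quantization: level = (|coeff| * mult + offset) >> shift,
    with the sign restored afterwards. Replaces level = coeff / Qstep."""
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) * mult + offset) >> shift)

def h264_like_quantize(coeff, qp, mf):
    """H.264-style forward quantization: the step doubles every 6 QP values,
    handled by qbits = 15 + QP // 6, while mf covers the QP % 6 fraction."""
    qbits = 15 + qp // 6
    offset = (1 << qbits) // 3      # a commonly used intra rounding offset
    return quantize(coeff, mf, qbits, offset)
```

For example, a divide by Qstep = 2 with round-to-nearest becomes `quantize(c, 32768, 16, 32768)`, i.e. multiplier 2^16/2 and a half-step offset.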
|
148 |
Progressive Lossless Image Compression Using Image Decomposition and Context Quantization
Zha, Hui. 23 January 2007 (has links)
Lossless image compression has many applications, for example in medical imaging, space photography and the film industry. In this thesis, we propose an efficient lossless image compression scheme for both binary images and gray-scale images. The scheme first decomposes images into a set of progressively refined binary sequences and then uses the context-based, adaptive arithmetic coding algorithm to encode these sequences. In order to deal with the context dilution problem in arithmetic coding, we propose a Lloyd-like iterative algorithm to quantize contexts. Fixing the set of input contexts and the number of quantized contexts, our context quantization algorithm iteratively finds the optimum context mapping in the sense of minimizing the compression rate. Experimental results show that by combining image decomposition and context quantization, our scheme can achieve competitive lossless compression performance compared to the JBIG algorithm for binary images, and the CALIC algorithm for gray-scale images. In contrast to CALIC, our scheme provides the additional feature of allowing progressive transmission of gray-scale images, which is very appealing in applications such as web browsing.
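A hedged sketch of a Lloyd-like context quantizer in the spirit described above: raw contexts (represented here as symbol count vectors) are iteratively reassigned to the quantized context whose merged distribution codes them most cheaply, which shrinks the total empirical code length. The initialization, iteration count and tie-breaking below are assumptions, not the thesis's actual algorithm.

```python
import math

def codelength(counts, probs):
    """Ideal code length (bits) for the given symbol counts under probs."""
    return -sum(c * math.log2(p) for c, p in zip(counts, probs) if c)

def group_probs(members, contexts, nsym):
    """Symbol distribution of a merged (quantized) context."""
    total = [1e-9] * nsym                       # tiny floor avoids log(0)
    for m in members:
        for s in range(nsym):
            total[s] += contexts[m][s]
    z = sum(total)
    return [t / z for t in total]

def quantize_contexts(contexts, k, iters=30):
    """Lloyd-like context quantization: alternately recompute the merged
    distributions and reassign each raw context to the cheapest group."""
    nsym = len(contexts[0])
    mapping = [i % k for i in range(len(contexts))]   # arbitrary initial map
    for _ in range(iters):
        groups = [[i for i, g in enumerate(mapping) if g == j]
                  for j in range(k)]
        dists = [group_probs(g, contexts, nsym) if g else [1.0 / nsym] * nsym
                 for g in groups]
        mapping = [min(range(k), key=lambda j: codelength(c, dists[j]))
                   for c in contexts]
    return mapping
```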
|
149 |
Optimal Dither and Noise Shaping in Image Processing
Christou, Cameron. 11 August 2008 (has links)
Dithered quantization and noise shaping are well known in the audio community. The image processing community seems to be aware of this same theory only in bits and pieces, and frequently under conflicting terminology. This thesis attempts to show that dithered quantization of images is an extension of dithered quantization of audio signals to higher dimensions.
Dithered quantization, or "threshold modulation", is investigated as a means of suppressing undesirable visual artifacts during the digital quantization, or requantization, of an image. Special attention is given to the statistical moments of the resulting error signal. Afterwards, noise shaping, or "error diffusion", methods are considered to try to improve on the dithered quantization technique.
We also take time to develop the minimum-phase property for two-dimensional systems. This leads to a natural extension of Jensen's Inequality and the Hilbert transform relationship between the log-magnitude and phase of a two-dimensional system. We then describe how these developments are relevant to image processing.
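Two textbook building blocks behind these ideas, sketched in stdlib Python (the parameter choices and function names are illustrative, not the thesis's): TPDF-dithered quantization, and Floyd-Steinberg error diffusion as the classic noise-shaping scheme.

```python
import random

def tpdf_dither_quantize(x, step, rng=random.Random(0)):
    """Quantize with triangular-PDF dither: add the sum of two independent
    uniforms in [-step/2, step/2), then round to the nearest level.  TPDF
    dither of this amplitude decouples the first two moments of the
    quantization error from the signal."""
    d = (rng.random() - 0.5) * step + (rng.random() - 0.5) * step
    return step * round((x + d) / step)

def floyd_steinberg(img, levels=2):
    """Classic error diffusion: quantize each pixel in raster order, then
    push the quantization error onto unprocessed neighbours with the
    7/16, 3/16, 5/16, 1/16 Floyd-Steinberg weights."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = round(old * (levels - 1)) / (levels - 1)
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1][x - 1] += err * 3 / 16
                out[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16
    return out
```

On a constant mid-gray image, error diffusion produces a binary pattern whose local average tracks the input level, which is the "noise shaping" effect in spatial form.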
|
150 |
Implementation and Evaluation of Image Retrieval Method Utilizing Geographic Location Metadata
Lundstedt, Magnus. January 2009 (has links)
Multimedia retrieval systems are very important today, with millions of content creators all over the world generating huge multimedia archives. Recent developments allow for content-based image and video retrieval. These methods are often quite slow, especially if applied to a library of millions of media items. In this research a novel image retrieval method is proposed which utilizes spatial metadata on images. By finding clusters of images based on their geographic location (the spatial metadata) and combining this information with existing content-based image retrieval algorithms, the proposed method enables efficient presentation of high-quality image retrieval results to system users. The clustering methods considered include Vector Quantization, Vector Quantization LBG and DBSCAN. Clustering was performed on three different similarity measures: spatial metadata, histogram similarity and texture similarity. For histogram similarity there are many different distance metrics to use when comparing histograms; the Euclidean, Quadratic Form and Earth Mover's Distance metrics were studied, as well as three different color spaces: RGB, HSV and CIE Lab.
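A minimal sketch of LBG-style vector quantization clustering applied to (latitude, longitude) pairs, as one of the clustering options mentioned. The initialization and the plain Euclidean metric are simplifying assumptions: real geographic clustering would want a great-circle distance, and plain lat/lon Euclidean distance is only a rough approximation over small areas.

```python
def lbg(points, k, iters=20):
    """LBG / k-means style vector quantization: alternate between assigning
    points to the nearest centroid and recomputing the centroids."""
    cents = points[:k]                       # naive initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - cents[i][0]) ** 2
                                + (p[1] - cents[i][1]) ** 2)
            clusters[j].append(p)
        # recompute centroids; keep the old one if a cluster went empty
        cents = [[sum(c[d] for c in cl) / len(cl) for d in (0, 1)]
                 if cl else cents[j]
                 for j, cl in enumerate(clusters)]
    return cents, clusters
```

Feeding in photo geotags, each resulting cluster is a candidate "place" whose images can then be ranked by the content-based similarity measures.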
|