291

Application de la compression à la tractographie en imagerie par résonance magnétique de diffusion / Applying compression to tractography in diffusion magnetic resonance imaging

Presseau, Caroline January 2014
This master's thesis presents a new fiber-compression algorithm developed specifically for tractography. Validated and tested on a wide range of tractography algorithms and parameters, it consists of three main steps: linearization, quantization, and encoding. The key concepts of diffusion magnetic resonance imaging (dMRI) and of compression are also introduced to help the reader's understanding.
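The three-stage pipeline named in the abstract is easy to picture with a toy example. The following Python is an illustrative sketch under assumptions of my own (a simple deviation test for linearization, a uniform grid for quantization, delta encoding), not the algorithm validated in the thesis:

import numpy as np

def compress_streamline(points, max_dev=0.2, step=0.1):
    """Toy 3-stage compression of one fiber (a polyline of 3D points):
    linearization, quantization, encoding. Thresholds are arbitrary."""
    pts = np.asarray(points, dtype=float)
    # 1) Linearization: drop points that stay close to the segment joining
    #    the last kept point and the next point.
    kept = [0]
    for i in range(1, len(pts) - 1):
        a, b = pts[kept[-1]], pts[i + 1]
        d = b - a
        t = np.clip(np.dot(pts[i] - a, d) / max(np.dot(d, d), 1e-12), 0.0, 1.0)
        if np.linalg.norm(pts[i] - (a + t * d)) > max_dev:
            kept.append(i)
    kept.append(len(pts) - 1)
    # 2) Quantization: snap surviving coordinates to a uniform grid.
    q = np.round(pts[kept] / step).astype(np.int32)
    # 3) Encoding: first point plus successive differences; the deltas are
    #    small integers that compress well with any entropy coder.
    return q[0], np.diff(q, axis=0)

Decompression reverses the steps (cumulative sum of the deltas, then multiply by step); both the point removal and the grid snapping are lossy, which is why such a compressor has to be validated across many tractography algorithms and parameters.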
292

Analysis of Fix-point Aspects for Wireless Infrastructure Systems

Grill, Andreas, Englund, Robin January 2009
A large amount of today's telecommunication consists of mobile and short-distance wireless applications, where the effect of the channel is unknown and changes over time, and thus needs to be described statistically. The received signal therefore cannot be accurately predicted and has to be estimated. Since telecom systems run in real time, the receiver hardware that estimates the transmitted signal can, for example, be based on a DSP on which the statistical calculations are performed. A fixed-point DSP, with a limited number of bits and a fixed binary point, causes larger quantization errors than higher-precision floating-point operations. The focus of this thesis has been to build a library of functions for handling fixed-point data. A class that handles the most common arithmetic operations and a least-squares solver for fixed-point data have been implemented in MATLAB code. The MATLAB Fixed-Point Toolbox could have been used for this task, but an independent library was created in order to have full control of the algorithms and the fixed-point handling. The conclusion of the simulations made in this thesis is that the least-squares results depend more on the number of integer bits than on the number of fractional bits.

Keywords: fixed-point arithmetic, telecommunications, DSP, MATLAB, Fixed-Point Toolbox, least-squares solver, floating point, Householder QR factorization, saturation, quantization noise
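The stated conclusion is intuitive: too few integer bits saturate large values and corrupt the least-squares fit, while too few fractional bits only add rounding noise. A small Python sketch of that effect, using a simplified fixed-point model of my own rather than the thesis's MATLAB class:

import numpy as np

def to_fixed(x, int_bits, frac_bits):
    """Round to a signed fixed-point grid and saturate to its range
    (1 sign bit + int_bits + frac_bits; an illustrative model only)."""
    scale = 2.0 ** frac_bits
    top = 2.0 ** int_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, -2.0 ** int_bits, top)

rng = np.random.default_rng(0)
A = rng.normal(0.0, 4.0, size=(50, 3))
x_true = np.array([1.5, -2.0, 0.75])
b = A @ x_true

# Few integer bits (saturation) should hurt far more than few fractional
# bits (rounding), mirroring the thesis's conclusion.
for int_bits, frac_bits in [(2, 12), (6, 12), (6, 4)]:
    Aq, bq = to_fixed(A, int_bits, frac_bits), to_fixed(b, int_bits, frac_bits)
    x_hat, *_ = np.linalg.lstsq(Aq, bq, rcond=None)
    print(int_bits, frac_bits, np.linalg.norm(x_hat - x_true))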
293

Integration of Auxiliary Data Knowledge in Prototype Based Vector Quantization and Classification Models

Kaden, Marika 14 July 2016
This thesis deals with the integration of auxiliary data knowledge into machine learning methods, especially prototype-based classification models. The problem of classification is diverse, and evaluating the result by accuracy alone is not adequate in many applications. Therefore, the classification tasks are analyzed more deeply. Possibilities to extend prototype-based methods to integrate extra knowledge about the data or the classification goal are presented, in order to obtain problem-adequate models. One of the proposed extensions is a Generalized Learning Vector Quantization that directly optimizes statistical measures besides the classification accuracy. Modifying the metric adaptation of Generalized Learning Vector Quantization for functional data, i.e. data with lateral dependencies in the features, is also considered.
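For orientation, the Generalized Learning Vector Quantization update that such extensions build on can be sketched in a few lines. This is the classic Sato-Yamada scheme in simplified form (constant factors folded into the learning rate), not the statistically extended variants proposed in the thesis:

import numpy as np

def glvq_step(x, y, protos, labels, lr=0.05):
    """One GLVQ update on prototype matrix `protos` (rows) with integer
    class array `labels`: attract the nearest correct prototype, repel
    the nearest wrong one, weighted by the gradient of the relative
    distance mu = (d_plus - d_minus) / (d_plus + d_minus)."""
    d = np.sum((protos - x) ** 2, axis=1)            # squared distances
    j = np.argmin(np.where(labels == y, d, np.inf))  # nearest correct
    k = np.argmin(np.where(labels != y, d, np.inf))  # nearest wrong
    denom = (d[j] + d[k]) ** 2
    protos[j] += lr * (d[k] / denom) * (x - protos[j])
    protos[k] -= lr * (d[j] / denom) * (x - protos[k])
    return protos

The thesis's extensions optimize statistical measures beyond accuracy and, for functional data, modify the metric adaptation underlying the distances d_plus and d_minus.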
294

On the Convergence and Applications of Mean Shift Type Algorithms

Aliyari Ghassabeh, Youness 01 October 2013
Mean shift (MS) and subspace constrained mean shift (SCMS) algorithms are non-parametric, iterative methods to find a representation of a high-dimensional data set on a principal curve or surface embedded in a high-dimensional space. The representation of high-dimensional data on a principal curve or surface, the class of mean shift type algorithms and their properties, and applications of these algorithms are the main focus of this dissertation. Although MS and SCMS algorithms have been used in many applications, a rigorous study of their convergence is still missing. This dissertation aims to fill some of the gaps between theory and practice by investigating some convergence properties of these algorithms. In particular, we propose a sufficient condition for a kernel density estimate with a Gaussian kernel to have isolated stationary points, which guarantees the convergence of the MS algorithm. We also show that the SCMS algorithm inherits some of the important convergence properties of the MS algorithm. In particular, the monotonicity and convergence of the density estimate values along the sequence of output values of the algorithm are shown. We also show that the distance between consecutive points of the output sequence converges to zero, as does the projection of the gradient vector onto the subspace spanned by the D-d eigenvectors corresponding to the D-d largest eigenvalues of the local inverse covariance matrix. Furthermore, three new variations of the SCMS algorithm are proposed, and the running times and performance of the resulting algorithms are compared with the original SCMS algorithm. We also propose an adaptive version of the SCMS algorithm that accounts for new incoming samples without running the algorithm on the whole data set. As well, we develop some new potential applications of the MS and SCMS algorithms. These applications involve finding straight lines in digital images; pre-processing data before applying locally linear embedding (LLE) and ISOMAP for dimensionality reduction; noisy source vector quantization, where the clean data need to be estimated before the quantization step; improving the performance of kernel regression in certain situations; and skeletonization of digitally stored handwritten characters. / Thesis (Ph.D., Mathematics & Statistics) -- Queen's University, 2013-09-30
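The basic MS iteration whose convergence is at issue fits in a few lines; SCMS additionally projects each shift onto the subspace spanned by the D-d eigenvectors mentioned above. A textbook sketch of the plain Gaussian-kernel MS update (not the dissertation's variants):

import numpy as np

def mean_shift_point(x, data, bandwidth=1.0, tol=1e-6, max_iter=500):
    """Iterate the mean shift update until the shift vector vanishes;
    x then sits at a stationary point (mode) of the Gaussian KDE."""
    for _ in range(max_iter):
        w = np.exp(-0.5 * np.sum((data - x) ** 2, axis=1) / bandwidth ** 2)
        m = w @ data / w.sum()           # kernel-weighted sample mean
        if np.linalg.norm(m - x) < tol:  # consecutive points converge
            return m
        x = m
    return x

The dissertation's sufficient condition (isolated stationary points of the kernel density estimate) is what rules out the cases in which such an iteration could fail to converge.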
295

Rendu d'images en demi-tons par diffusion d'erreur sensible à la structure / Halftone rendering by structure-aware error diffusion

Alain, Benoît 12 1900
This work covers some important methods in the domain of halftoning: analog screening, ordered dither, direct binary search, and most particularly error diffusion. The methods are compared in the modern perspective of sensitivity to structure. A novel error-diffusion halftoning method is also presented and subjected to various evaluations. It is intended to be original and simple; it produces images of visual quality comparable to that of the state-of-the-art Structure-aware Halftoning method, while being two to three orders of magnitude faster. First, it is described how an image can be decomposed into its local frequency content. Then, the basic behavior of the proposed method is given. Next, a carefully chosen set of parameters is presented that allows this behavior to be modified so as to match the different local frequency characteristics. Finally, a calibration step determines what parameter values should be associated with each possible frequency. Once the algorithm is assembled, any image can be treated very efficiently: each pixel is attached to its dominant frequency, the frequency serves as a lookup index into the calibration table, the proper diffusion parameters are retrieved, and the output color determined for the pixel contributes, in expectation, to underlining the structure from which it comes.
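For context, plain error diffusion is compact; the novelty described above lies in modulating the diffusion per pixel according to local frequency, which the classic Floyd-Steinberg scheme below does not do. A standard sketch for a grayscale image with values in [0, 1]:

import numpy as np

def floyd_steinberg(gray):
    """Classic error diffusion: threshold each pixel to black or white
    and push the quantization error onto the unprocessed neighbors."""
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return img

In a method like the one described above, the fixed weights 7/16, 3/16, 5/16, 1/16 effectively become per-pixel parameters retrieved from the calibration table.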
296

Quantification des sous-algèbres de Lie coisotropes / Quantization of coisotropic Lie subalgebras

Ohayon, Jonathan 09 July 2012
The aim of this thesis is the study of the quantization of coisotropic Lie subalgebras of Lie bialgebras. A coisotropic Lie subalgebra of a Lie bialgebra is a Lie subalgebra which is also a Lie coideal. The problem of quantization of coisotropic Lie subalgebras was set forth by V. Drinfeld in his study of the quantization of Poisson homogeneous spaces G/C. These problems are closely related to the duality principle established by N. Ciccoli and F. Gavarini. In this thesis, we search for an answer to this quantization problem in different settings. Firstly, we show that a quantization exists for simple Lie bialgebras by constructing a quantization of the examples provided by M. Zambon. We then establish a link between the quantization we constructed and a classification of right coideal subalgebras established by I. Heckenberger and S. Kolb. Secondly, we find an obstruction to quantization in the universal setting by using a third-order quantization constructed by V. Drinfeld. We show that this obstruction vanishes in the examples studied earlier. Finally, we generalize a result of P. Etingof and D. Kazhdan on the quantization of Poisson homogeneous spaces, linked to Lagrangian Lie subalgebras of Drinfeld's double.
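For reference, the coideal condition in the first sentence can be written out explicitly; this is the standard definition from the Lie bialgebra literature, in notation of my choosing:

\[
[\mathfrak{c},\mathfrak{c}] \subseteq \mathfrak{c}
\qquad\text{and}\qquad
\delta(\mathfrak{c}) \subseteq \mathfrak{c}\otimes\mathfrak{g} + \mathfrak{g}\otimes\mathfrak{c},
\]

where \(\mathfrak{c}\subset\mathfrak{g}\) is the subalgebra and \(\delta\colon\mathfrak{g}\to\mathfrak{g}\otimes\mathfrak{g}\) is the cobracket of the Lie bialgebra \((\mathfrak{g},[\cdot,\cdot],\delta)\).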
297

Règles de quantification semi-classique pour une orbite périodique de type hyperbolique / Semi-classical quantization rules for a periodic orbit of hyperbolic type

Louati, Hanen 27 January 2017
In this thesis we consider semi-excited resonances for an h-pseudo-differential operator (h-PDO for short) H(x, hDx; h) on L2(M) induced by a periodic orbit of hyperbolic type at energy E = 0, as arises when M = R^n and H(x, hDx; h) is the Schrödinger operator with AC Stark effect, or when H(x, hDx; h) is the geodesic flow on an axially symmetric manifold M, extending Poincaré's example of Lagrangian systems with 2 degrees of freedom. We generalize the framework of Gérard and Sjöstrand, in the sense that we allow for hyperbolic and elliptic eigenvalues of the Poincaré map, and look for (excited) resonances with imaginary part of magnitude h^s, with 0 < s < 1. It is known that these resonances are given by the zeroes of a determinant associated with the Poincaré map. We make this result more precise by providing a first-order asymptotics of the Bohr-Sommerfeld quantization rule in terms of the (real) longitudinal and (complex) transverse quantum numbers, including the action integral along the orbit, the sub-principal 1-form, and the Conley-Zehnder (Gelfand-Lidskii) index.
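For orientation, the classical one-dimensional rule that such results refine reads, in textbook form (schematic only, not the thesis's statement):

\[
\frac{1}{2\pi h}\oint_{\gamma}\xi\,dx \;=\; n+\frac{\mu(\gamma)}{4}+O(h),\qquad n\in\mathbb{Z},
\]

where \(\gamma\) is the periodic orbit and \(\mu(\gamma)\) its Maslov index; here the quantization condition is instead expressed through longitudinal and complex transverse quantum numbers, with the sub-principal 1-form and the Conley-Zehnder index entering the first-order asymptotics.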
298

Quantification vectorielle en grande dimension : vitesses de convergence et sélection de variables / High-dimensional vector quantization: convergence rates and variable selection

Levrard, Clément 30 September 2014
This manuscript first studies the distortion, or quantization error, of the quantizer built from an n-sample of a probability distribution over a vector space with the well-known k-means algorithm. More precisely, it aims to give oracle inequalities on the difference between the distortion of the k-means quantizer and the minimum distortion achievable by a k-point quantizer, describing the influence of the natural parameters of the quantization problem: the support of the distribution to be quantized, the number k of image points, the dimension of the underlying vector space, and the size n of the sample used to build the quantizer. After a brief summary of previous work on this topic, an equivalence is established, in the continuous-density case, between the conditions previously stated for the excess distortion to decrease fast with the sample size and a technical condition. Interestingly, this condition resembles the conditions required in supervised classification to achieve fast rates of convergence. It is then proved that, under this technical condition, a convergence rate of order 1/n can be achieved in expectation. Next, an easily interpretable condition, called the margin condition, is introduced and shown to be sufficient for the technical condition above. Several classical examples of distributions satisfying the margin condition are given, such as Gaussian mixtures. If the margin condition is satisfied, a precise description of the dependence of the excess distortion can be given via a bound in expectation: the sample size enters through a factor 1/n, the number k of image points enters through various geometric quantities associated with the distribution, and, surprisingly, the dimension of the underlying space seems to play no role. This last point allows the results to be extended to the framework of Hilbert spaces, the adapted setting for curve quantization. However, effective high-dimensional quantization often requires a variable-reduction step in practice, which motivates the study of a variable-selection procedure adapted to quantization. More precisely, a Lasso-type procedure adapted to the vector quantization framework is studied, in which the Lasso penalty applies to the set of image points of the quantizer in order to obtain sparse image points. If the margin condition introduced above is satisfied, several theoretical guarantees are established for the quantizer resulting from this procedure, called the Lasso k-means quantizer: its image points are close to those of a naturally sparse quantizer achieving a tradeoff between quantization error and the size of the support of the image points, and its excess distortion is of order 1/n^(1/2) in the sample size. Moreover, the dependence of this distortion on the other parameters of the problem is given explicitly. These theoretical predictions are illustrated with numerical experiments, which broadly confirm the expected properties of such a sparse quantizer, while also highlighting some drawbacks of the practical implementation of the procedure.
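In compact notation, the central quantities read as follows (standard k-means notation of my choosing, not quoted from the manuscript):

\[
R(\mathbf{c})=\mathbb{E}\Big[\min_{1\le j\le k}\lVert X-c_j\rVert^{2}\Big],
\qquad
\ell(\hat{\mathbf{c}}_n)=R(\hat{\mathbf{c}}_n)-\inf_{\mathbf{c}}R(\mathbf{c}),
\]

where \(\mathbf{c}=(c_1,\dots,c_k)\) is a codebook and \(\hat{\mathbf{c}}_n\) the empirical k-means codebook; under the margin condition the excess distortion satisfies \(\mathbb{E}\,\ell(\hat{\mathbf{c}}_n)=O(1/n)\), while the Lasso k-means variant attains \(O(1/\sqrt{n})\).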
299

Reconhecimento automático de locutor em modo independente de texto por Self-Organizing Maps. / Text-independent automatic speaker recognition using Self-Organizing Maps.

Mafra, Alexandre Teixeira 18 December 2002
The design of machines that can identify people is a problem whose solution has a wide range of applications. Software systems based on measurements of personal physical attributes (biometrics) are beginning to be produced on a commercial scale. Automatic Speaker Recognition systems fall into this category, using voice as the identifying attribute. At present, the most popular methods are based on the extraction of mel-frequency cepstral coefficients (MFCCs), followed by speaker identification with Hidden Markov Models (HMMs), Gaussian Mixture Models (GMMs) or vector quantization. This preference is motivated by the quality of the results obtained with these methods. Making these systems robust, so that they remain efficient in noisy environments, is now a major concern. Just as relevant are the problems related to performance degradation in applications involving a large number of speakers, and the issues related to possible fraud through the use of recorded voices. Another important subject is embedding these systems as sub-systems of existing devices, enabling them to work according to their operator. This work presents the relevant concepts and algorithms involved in the implementation of a text-independent Automatic Speaker Recognition software system. First, the processing of the voice signal and the extraction of its features essential for recognition are treated. After this, the way each speaker's voice is modeled by a Self-Organizing Map (SOM) neural network is described, together with the method for comparing the models' responses when an utterance from an unknown speaker is presented. Finally, the construction of the speech corpus used for training and testing the models, the network architectures tested, and the experimental results obtained in a speaker identification task are presented.
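One natural reading of the comparison step is to score an unknown utterance by the average quantization error of its MFCC frames against each speaker's trained map and pick the minimum. The Python sketch below assumes pre-trained SOM weight matrices and pre-computed MFCC frames as inputs (hypothetical names; the thesis's exact scoring rule may differ):

import numpy as np

def som_quantization_error(frames, som_weights):
    """Mean distance from each MFCC frame (rows of `frames`) to its
    best-matching unit in a trained SOM codebook (rows of `som_weights`)."""
    d = np.linalg.norm(frames[:, None, :] - som_weights[None, :, :], axis=2)
    return d.min(axis=1).mean()

def identify_speaker(frames, speaker_soms):
    """Return the speaker whose SOM explains the utterance best, i.e.
    yields the lowest average quantization error over its frames."""
    scores = {name: som_quantization_error(frames, w)
              for name, w in speaker_soms.items()}
    return min(scores, key=scores.get)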
300

Quantização de sistemas não-Lagrangianos e mecânica quântica não-comutativa / Quantization of non-Lagrangian systems and noncommutative quantum mechanics

Kupriyanov, Vladislav 23 March 2009
We present here three interrelated problems: the quantization of non-Lagrangian theories, noncommutative quantum mechanics (NCQM), and the construction of the star product through Weyl ordering. In the context of the first problem, an approach to the canonical quantization of systems with non-Lagrangian equations of motion is proposed. We construct an action principle for an equivalent system of first-order differential equations. There exists a non-trivial ambiguity (not reducible to a total time derivative) in associating a Lagrange function with a given system of first-order equations, and we give a complete description of this ambiguity. The proposed scheme is applied to the quantization of a general quadratic theory. The quantization of a damped harmonic oscillator and of a radiating point-like charge is also constructed. In the context of NCQM, we propose a path-integral formulation of relativistic NCQM and construct a noncommutative generalization of the superparticle action. After quantization, the proposed action reproduces the Klein-Gordon and Dirac equations of the noncommutative field theories. In the context of the third problem, we develop an approach to deformation quantization on the real plane with an arbitrary Poisson structure, based on symmetrically (Weyl) ordered operator products. A simple and effective iterative procedure for the construction of star products is formulated. This procedure allowed us to calculate the star product to higher orders (third and fourth), something done here for the first time. Modulo some cohomology issues which we do not consider here, the method gives an explicit description of the star product in the usual mathematical language of physics.
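For orientation, the expansion that such an iterative procedure produces begins, for a Poisson structure \(\omega^{ij}(x)\) on the plane, as (standard normalization, not a formula quoted from the thesis):

\[
f\star g \;=\; fg+\frac{i\hbar}{2}\,\omega^{ij}\,\partial_i f\,\partial_j g+O(\hbar^{2}),
\]

with associativity at each order constraining the admissible corrections; the symmetric (Weyl) ordering fixes a particular representative, computed here explicitly to third and fourth order.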
